ROBUST CONTROL, 
THEORY AND APPLICATIONS 



Edited by Andrzej Bartoszewicz 



INTECHWEB.ORG 



Robust Control, Theory and Applications 

Edited by Andrzej Bartoszewicz 



Published by InTech 

Janeza Trdine 9, 51000 Rijeka, Croatia 

Copyright © 2011 InTech 

All chapters are Open Access articles distributed under the Creative Commons 
Non Commercial Share Alike Attribution 3.0 license, which permits users to copy, 
distribute, transmit, and adapt the work in any medium, so long as the original 
work is properly cited. After this work has been published by InTech, authors 
have the right to republish it, in whole or part, in any publication of which they 
are the author, and to make other personal use of the work. Any republication, 
referencing or personal use of the work must explicitly identify the original source. 

Statements and opinions expressed in the chapters are those of the individual contributors 
and not necessarily those of the editors or publisher. No responsibility is accepted 
for the accuracy of information contained in the published articles. The publisher 
assumes no responsibility for any damage or injury to persons or property arising out 
of the use of any materials, instructions, methods or ideas contained in the book. 

Publishing Process Manager Katarina Lovrecic 

Technical Editor Teodora Smiljanic 

Cover Designer Martina Sirotic 

Image Copyright buriy, 2010. Used under license from Shutterstock.com 

First published March, 2011 
Printed in India 

A free online edition of this book is available at www.intechopen.com 
Additional hard copies can be obtained from orders@intechweb.org 



Robust Control, Theory and Applications, Edited by Andrzej Bartoszewicz 

p. cm. 
ISBN 978-953-307-229-6 



OPEN ACCESS 
PUBLISHER 



INTECH 
INTECHopen 

free online editions of InTech 
Books and Journals can be found at 
www.intechopen.com 



Contents 



Preface XI 
Part 1 Fundamental Issues in Robust Control 1 

Chapter 1 Introduction to Robust Control Techniques 3 

Khaled Halbaoui, Djamel Boukhetala and Fares Boudjema 

Chapter 2 Robust Control of Hybrid Systems 25 

Khaled Halbaoui, Djamel Boukhetala and Fares Boudjema 

Chapter 3 Robust Stability and Control of Linear Interval 

Parameter Systems Using Quantitative (State Space) 
and Qualitative (Ecological) Perspectives 43 

Rama K. Yedavalli and Nagini Devarakonda 

Part 2 H-infinity Control 67 

Chapter 4 Robust H∞ PID Controller Design Via 
LMI Solution of Dissipative Integral 
Backstepping with State Feedback Synthesis 69 

Endra Joelianto 

Chapter 5 Robust H∞ Tracking Control of Stochastic 
Innate Immune System Under Noises 89 

Bor-Sen Chen, Chia-Hung Chang and Yung-Jen Chuang 

Chapter 6 Robust H∞ Reliable Control of Uncertain Switched 
Nonlinear Systems with Time-varying Delay 117 

Ronghao Wang, Jianchun Xing, Ping Wang, 
Qiliang Yang and Zhengrong Xiang 

Part 3 Sliding Mode Control 139 

Chapter 7 Optimal Sliding Mode Control for a Class of Uncertain 

Nonlinear Systems Based on Feedback Linearization 141 

Hai-Ping Pang and Qing Yang 






Chapter 8 Robust Delay-Independent/Dependent 

Stabilization of Uncertain Time-Delay Systems 
by Variable Structure Control 163 

Elbrous M. Jafarov 

Chapter 9 A Robust Reinforcement Learning System 
Using Concept of Sliding Mode Control 
for Unknown Nonlinear Dynamical System 197 

Masanao Obayashi, Norihiro Nakahara, Katsumi Yamada, 
Takashi Kuremoto, Kunikazu Kobayashi and Liangbing Feng 

Part 4 Selected Trends in Robust Control Theory 215 

Chapter 10 Robust Controller Design: New Approaches 
in the Time and the Frequency Domains 217 

Vojtech Vesely, Danica Rosinova and Alena Kozakova 

Chapter 11 Robust Stabilization and Discretized PID Control 243 

Yoshifumi Okuyama 

Chapter 12 Simple Robust Normalized PI Control 

for Controlled Objects with One-order Modelling Error 261 

Makoto Katoh 

Chapter 13 Passive Fault Tolerant Control 283 

M. Benosman 

Chapter 14 Design Principles of Active Robust 
Fault Tolerant Control Systems 309 

Anna Filasova and Dusan Krokavec 



Chapter 15 Robust Model Predictive Control for Time Delayed 

Systems with Optimizing Targets and Zone Control 339 

Alejandro H. Gonzalez and Darci Odloak 

Chapter 16 Robust Fuzzy Control of Parametric Uncertain 

Nonlinear Systems Using Robust Reliability Method 371 

Shuxiang Guo 

Chapter 17 A Frequency Domain Quantitative Technique 
for Robust Control System Design 391 

Jose Luis Guzman, Jose Carlos Moreno, Manuel Berenguel, 
Francisco Rodriguez and Julian Sanchez-Hermosilla 

Chapter 18 Consensuability Conditions of Multi Agent 

Systems with Varying Interconnection Topology 
and Different Kinds of Node Dynamics 423 

Sabato Manfredi 






Chapter 19 On Stabilizability and Detectability 
of Variational Control Systems 441 

Bogdan Sasu and Adina Luminita Sasu 

Chapter 20 Robust Linear Control of Nonlinear Flat Systems 455 

Hebertt Sira-Ramirez, John Cortes-Romero 
and Alberto Luviano-Juarez 

Part 5 Robust Control Applications 477 

Chapter 21 Passive Robust Control for Internet-Based 
Time-Delay Switching Systems 479 

Hao Zhang and Huaicheng Yan 

Chapter 22 Robust Control of the Two-mass Drive 

System Using Model Predictive Control 489 

Krzysztof Szabat, Teresa Orlowska-Kowalska and Piotr Serkies 

Chapter 23 Robust Current Controller Considering Position 
Estimation Error for Position Sensor-less Control 
of Interior Permanent Magnet Synchronous 
Motors under High-speed Drives 507 

Masaru Hasegawa and Keiju Matsui 

Chapter 24 Robust Algorithms Applied 

for Shunt Power Quality Conditioning Devices 523 

Joao Marcos Kanieski, Hilton Abilio Grundling and Rafael Cardoso 

Chapter 25 Robust Bilateral Control for Teleoperation System with 
Communication Time Delay - Application to DSD Robotic 
Forceps for Minimally Invasive Surgery - 543 

Chiharu Ishii 

Chapter 26 Robust Vehicle Stability Control Based 
on Sideslip Angle Estimation 561 

Haiping Du and Nong Zhang 

Chapter 27 QFT Robust Control 

of Wastewater Treatment Processes 577 

Marian Barbu and Sergiu Caraman 

Chapter 28 Control of a Simple Constrained 

MIMO System with Steady-state Optimization 603 

Frantisek Dusek and Daniel Honc 

Chapter 29 Robust Inverse Filter Design Based 
on Energy Density Control 619 

Junho Lee and Young-Cheol Park 






Chapter 30 Robust Control Approach for Combating 
the Bullwhip Effect in Periodic-Review 
Inventory Systems with Variable Lead-Time 635 

Przemyslaw Ignaciuk and Andrzej Bartoszewicz 

Chapter 31 Robust Control Approaches 

for Synchronization of Biochemical Oscillators 655 

Hector Puebla, Rogelio Hernandez Suarez, 

Eliseo Hernandez Martinez and Margarita M. Gonzalez-Brambila 



Preface 



The main purpose of control engineering is to steer the regulated plant in such a way 
that it operates in a required manner. The desirable performance of the plant should 
be obtained despite the unpredictable influence of the environment on all parts of the 
control system, including the plant itself, and no matter if the system designer knows 
precisely all the parameters of the plant. Even though the parameters may change with 
time, load and external circumstances, still the system should preserve its nominal 
properties and ensure the required behaviour of the plant. In other words, the principal 
objective of control engineering is to design control (or regulation) systems which 
are robust with respect to external disturbances and modelling uncertainty. This 
objective may very well be obtained in a number of ways which are discussed in this 
monograph. 

The monograph is divided into five sections. In section 1 some principal issues of the 
field are presented. That section begins with a general introduction presenting well 
developed robust control techniques, then discusses the problem of robust hybrid 
control and concludes with some new insights into stability and control of linear interval 
parameter plants. These insights are made both from an engineering (quantitative) 
perspective and from the population (community) ecology point of view. The next two 
sections, i.e. section 2 and section 3, are devoted to new results in the framework of two 
important robust control techniques, namely H-infinity and sliding mode control. The 
two control concepts are quite different from each other, however both are nowadays 
very well grounded theoretically, verified experimentally, and both are regarded as 
fundamental design techniques in modern control theory. Section 4 presents various 
other significant developments in the theory of robust control. It begins with three 
contributions related to the design of continuous and discrete time robust proportional 
integral derivative controllers. Next, the section discusses selected problems in passive 
and active fault tolerant control, and presents some important issues of robust 
model predictive and fuzzy control. Recent developments in quantitative feedback 
theory, stabilizability and detectability of variational control systems, control of multi 
agent systems and control of flat systems are also the topics considered in the same 
section. The monograph is concerned not only with a wide spectrum of theoretical 
issues in the robust control domain, but it also demonstrates a number of successful, 
recent engineering and non-engineering applications of the theory. These are described 
in section 5 and include internet based switching control, and applications of robust 
control techniques in electric drives, power electronics, bilateral teleoperation systems, 

In conclusion, the main objective of this monograph is to present a broad range of well 
worked out, recent theoretical and application studies in the field of robust control 
system analysis and design. We believe that, thanks to the authors and to the InTech 
Open Access Publisher, this ambitious objective has been successfully accomplished. 
The editor and authors truly hope that the result of this joint effort will be of significant 
interest to the control community and that the contributions presented here will 
advance the progress in the field, and motivate and encourage new ideas and solutions 
in the robust control area. 



Andrzej Bartoszewicz 

Institute of Automatic Control, 

Technical University of Lodz 

Poland 



Part 1 
Fundamental Issues in Robust Control 



1 



Introduction to Robust Control Techniques 

Khaled Halbaoui 1,2, Djamel Boukhetala 2 and Fares Boudjema 2 

1 Power Electronics Laboratory, Nuclear Research Centre of Birine CRNB, 

BP 180 Ain Oussera 17200, Djelfa 

2 Laboratoire de Commande des Processus, ENSP, 

10 avenue Pasteur, Hassan Badi, BP 182 El-Harrach 

Algeria 



1. Introduction 

The theory of "Robust" Linear Control Systems has grown remarkably over the past ten 
years. Its popularity is now spreading over the industrial environment, where it is an 
invaluable tool for the analysis and design of servo systems. This rapid penetration is due to 
two major advantages: its applied nature and its relevance to the practical problems of the 
automation engineer. 

To appreciate the originality and interest of robust control tools, let us recall that a control 
system has two essential functions: 

• shaping the response of the servo system to give it the desired behaviour, 

• maintaining this behaviour despite the fluctuations that affect the system during 
operation (wind gusts for aircraft, wear for a mechanical system, configuration changes 
for a robot, etc.). 

This second requirement is termed "robustness to uncertainty". It is critical to the reliability 
of the servo system. Indeed, the control is typically designed from an idealized and simplified 
model of the real system. 
To function properly, it must be robust to the imperfections of the model, i.e. to the 
discrepancies between the model and the real system, to the variations of the physical 
parameters and to the external disturbances. 

The main advantage of robust control techniques is to generate control laws that satisfy the 
two requirements mentioned above. More specifically, given a specification of the desired 
behaviour and frequency-domain estimates of the magnitude of the uncertainty, the theory 
evaluates the feasibility, produces a suitable control law, and provides a guarantee on the 
range of validity of this control law (robustness). This combined approach is systematic and 
very general. In particular, it is directly applicable to multiple-input multiple-output (MIMO) 
systems. 

To some extent, the theory of robust control reconciles classical frequency-domain control 
(Bode, Nyquist, PID) and modern state-space control (linear quadratic control, Kalman). 
It indeed combines the best of both. From classical control it borrows the richness of 
frequency-domain analysis. This framework is particularly conducive to the specification of 
performance objectives (quality of tracking or regulation), of bandwidth and of 
robustness. From modern control it inherits the simplicity and power of state-space synthesis 






methods. Through these systematic synthesis tools, the engineer can now impose complex 
frequency-domain specifications and obtain directly a feasibility diagnosis and an appropriate 
control law. He can then concentrate on finding the best compromise and on analyzing the 
limitations of his system. 

This chapter is an introduction to the techniques of robust control. Since this area is still 
evolving, we will mainly seek to provide a state of the art, with emphasis on methods 
already proven and on the underlying philosophy. For simplicity, we restrict ourselves to 
continuous-time linear time-invariant (LTI) systems. Finally, to remain true to the 
practice of this theory, we will focus on implementation rather than on the mathematical and 
historical aspects of the theory. 

2. Basic concepts 

Control theory is concerned with influencing systems so that certain output 
quantities take a desired course. These can be technical systems, like the heating of a room with 
output temperature, a boat with the output quantities heading and speed, or a power plant 
with the output electrical power. These systems may as well be social, chemical or biological, as, 
for example, the system of national economy with the output rate of inflation. The nature of the 
system does not matter. Only the dynamic behaviour is of great importance to the control 
engineer. We can describe this behaviour by differential equations, difference equations or 
other functional equations. In classical control theory, which focuses on technical systems, 
the system to be influenced is called the (controlled) plant. 

In which ways can we influence the system? Each system consists not only 
of output quantities, but also of input quantities. For the heating of a room, this, for 
example, will be the position of the valve; for the boat, the power of the engine and the angle of 
the rudder. These input variables have to be adjusted in such a manner that the output variables 
take the desired course, and they are called actuating variables. In addition to the actuating 
variables, disturbance variables affect the system, too. Consider, for instance, a heating system, 
where the temperature will be influenced by the number of people in the room or an open 
window, or a boat, whose course will be affected by water currents. 

The desired course of the output variables is defined by the reference variables. They can be 
defined by the operator, but they can also be defined by another system. For example, the 
autopilot of an aircraft calculates the reference values for the altitude, the course, and the speed 
of the plane. But we do not discuss the generation of reference variables here. In the 
following, we take them for granted. Just take into account that the reference variables 
do not necessarily have to be constant; they can also be time-varying. 

What information do we need to calculate the actuating variables so that the 
output variables of the system follow the reference variables? Clearly, the reference values 
for the output quantities, the behavior of the plant and the time-dependent behavior of the 
disturbance variables must be known. With this information, one can theoretically calculate 
the values of the actuating variables, which will then affect the system in such a way that the 
output quantities follow the desired course. This is the principle of a steering mechanism 
(Fig. 1). The input variable of the steering mechanism is the reference variable ω; its output 
quantity is the actuating variable u, which, together with the disturbance variable w, forms the 
input of the plant. y represents the output value of the system. 

The disadvantage of this method is obvious: if the behavior of the plant is not in accordance 
with the assumptions which we made about it, or if unforeseen disturbances occur, then the 
output quantities will not continue to follow the desired course. A steering mechanism 
cannot react to this deviation, because it does not know the output quantity of the plant. 







Fig. 1. Principle of a steering mechanism 

An improvement which can immediately be made is the principle of an (automatic) control 
(Fig. 2). Inside the automatic control, the reference variable ω is compared with the 
measured output variable of the plant y (the control variable), and a suitable output quantity of 
the controller u (the actuating variable) is calculated inside the control unit from the difference Δy 
(the control error). 

Formerly the control unit itself was called the controller, but modern controllers, 
including, among others, adaptive controllers (Boukhetala et al., 2006), show a 
structure in which the calculation of the difference between the actual and desired output 
value and the calculation of the control algorithm cannot be distinguished in the way just 
described. For this reason, the tendency today is to give the name controller to the 
section in which the actuating variable is obtained from the reference variable and 
the measured control variable. 






Fig. 2. Elements of a control loop 

The quantity u is usually given as a low-power signal, for example as a digital signal. But with 
low power it is not possible to act on a physical process. How, for example, could a 
boat change its course by means of a rudder angle calculated numerically, which means a sequence 
of zeros and ones at a voltage of 5 V? Because this is not possible directly, a static inverter and 
an electric rudder drive are necessary, which can affect the rudder angle and the boat's 
route. If the position of the rudder is seen as the actuating variable of the system, the static 
inverter, the electric rudder drive and the rudder itself form the actuator of the system. The 
actuator converts the controller output, a signal of low power, into the actuating variable, a 
signal of high power that can directly affect the plant. 

Alternatively, the output of the static inverter, that means the armature voltage of the 
rudder drive, could be seen as the actuating variable. In this case, the actuator would consist 
only of the static inverter, whereas the rudder drive and the rudder would have to be added to the 
plant. These various views already show that a strict separation between the actuator and 
the process is not possible. But it is not necessary either, since for the design of the controller 






we will have to take every transfer characteristic from the controller output to the control 

variable into account anyway. Thus, we will treat the actuator as an element of the plant, 

and henceforth we will employ the actuating variable to refer to the output quantity of the 

controller. 

For the feedback of the control variable to the controller the same problem holds, this time 
only in the opposite direction: a signal of high power must be transformed into a signal of 
low power. This happens in the measuring element, which again shows dynamic properties 
that should not be overlooked. 

Caused by this feedback, a crucial problem emerges, which we will illustrate by the following 
example represented in (Fig. 3). We could formulate the strategy of a boat's automatic control 
like this: the larger the deviation from the course is, the more the rudder should be steered 
in the opposite direction. At first glance, this strategy seems to be reasonable. If for some 
reason a deviation occurs, the rudder is adjusted. By steering into the opposite direction, the 
boat receives a rotatory acceleration in the direction of the desired course. 

The deviation is reduced until it finally disappears, but the rotating speed does not 
disappear with the deviation; it could only be reduced to zero by steering in the other 
direction. In this example, because of its rotating speed the boat will receive a deviation 
in the other direction after getting back to the desired course. Only afterwards will 
the rotating speed be reduced by the counter-steering caused by the new deviation. But as 
we already have a new deviation, the whole procedure starts again, only the other way 
round. The new deviation could be even greater than the first. 

The boat will begin zigzagging its way, if worst comes to worst, with ever-increasing 
deviations. This last case is called instability. If the amplitude of the oscillation remains the same, 
the system is said to be at the stability boundary. 

Only if the amplitudes decrease is the system stable. To obtain an acceptable control 
algorithm for the given example, we would have to take the dynamics of the plant into 
account when designing the control strategy. 

A suitable controller would produce a counter-steering with the rudder right in time to 
reduce the rotating speed to zero at the same time the boat gets back on course. 




Fig. 3. Automatic cruise control of a boat 

This example illustrates the requirements placed on controlling devices. One 
requirement is accuracy, i.e. the control error should be as small as possible once all the 
initial transients have died out and a stationary state is reached. Another requirement is 
speed, i.e. in the case of a changing reference value or a disturbance the control error should 
be eliminated as soon as possible. This is called the response behavior. The third and most 
important requirement is the stability of the whole system. We will see that these 
requirements contradict each other, forcing every kind of controller (and therefore 
fuzzy controllers, too) to be a compromise between the three. 






3. Frequency response 

If we know a plant's transfer function, it is easy to construct a suitable controller using this 
information. If we cannot derive the transfer function by theoretical considerations, we 
could as well employ statistical methods on the basis of a sufficient number of measured 
values to determine it. This method requires the use of a computer, a tool which was not 
available in earlier times. Consequently, in those days a different method was frequently 
employed to describe a plant's dynamic behavior: the frequency response (Franklin et al., 
2002). As we shall see later, the frequency response can easily be measured. Its convenient 
graphical representation leads to a clear design procedure for simple PID 
controllers. Moreover, several stability criteria, which are also employed 
in connection with fuzzy controllers, are rooted in a frequency-response-based characterization 
of a plant's behavior. 

The easiest way is to define the frequency response as the transfer function of a 
linear transfer element evaluated at purely imaginary values of s. 
Consequently, we only have to replace the complex variable s of the transfer function by the 
purely imaginary variable jω: G(jω) = G(s)|s=jω. The frequency response is thus a complex 
function of the parameter ω. Due to the restriction of s to purely imaginary values, the 
frequency response is only a part of the transfer function, but a part with special 
properties, as the following theorem shows: 

Theorem 1 If a linear transfer element has the frequency response G(jω), then its response to the 
input signal x(t) = a sin(ωt) will be, after all initial transients have settled down, the output signal 

y(t) = a |G(jω)| sin(ωt + φ(G(jω)))     (1) 

if the following condition holds: 

∫_0^∞ |g(t)| dt < ∞     (2) 

|G(jω)| is obviously the ratio of the output sine amplitude to the input sine amplitude 
(the (transmission) gain or amplification). φ(G(jω)) is the phase of the complex quantity G(jω) and 
gives the delay of the output sine in relation to the input sine (phase lag). g(t) is the impulse 
response of the plant. In case the integral in (2) does not converge, we have to add a term 
r(t) to the right-hand side of (1), which does not vanish even for t → ∞. 

The examination of this theorem shows clearly what kind of information about the plant the 
frequency response gives: Frequency response characterizes the system's behavior for any 
frequency of the input signal. Due to the linearity of the transfer element, the effects caused 
by single frequencies of the input signal do not interfere with each other. In this way, we are 
now able to predict the resulting effects at the system output for each single signal 
component separately, and we can finally superimpose these effects to predict the overall 
system output. 

Unlike the coefficients of a transfer function, we can measure the amplitude and phase shift 
of the frequency response directly: the plant is excited by a sinusoidal input signal of a 
certain frequency and amplitude. After all initial transients have settled, we obtain a 
sinusoidal signal at the plant output, whose phase position and amplitude differ from those of the 
input signal. These quantities can be measured and, according to (1), they instantly 
provide the amplitude and phase lag of the frequency response G(jω). In this way, we can 
construct a table for different input frequencies that gives the principal shape of the 
frequency response. Taking measurements for negative values of ω, i.e. for negative 
frequencies, is obviously not possible, but it is not necessary either: for rational transfer 
functions with real coefficients, and also for delay elements, G(-jω) is the complex conjugate 
of G(jω). Knowing that the function G(jω) for ω ≥ 0 already contains all 
the information needed, we can omit an examination of negative values of ω. 
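
As a small numerical illustration of this definition, the following Python sketch evaluates G(jω) = G(s)|s=jω for a rational transfer function and prints the gain and phase lag at a few frequencies. The first-order lag used here is an assumed example, not a system taken from this chapter.

    import numpy as np

    # Assumed first-order lag G(s) = 1 / (0.5 s + 1); coefficients in descending powers of s
    num = np.array([1.0])
    den = np.array([0.5, 1.0])

    def freq_response(w):
        """Evaluate G(jw) = G(s)|_{s=jw} for a rational transfer function."""
        s = 1j * w
        return np.polyval(num, s) / np.polyval(den, s)

    for w in (0.1, 1.0, 2.0, 10.0):
        G = freq_response(w)
        gain = np.abs(G)                    # ratio of output to input sine amplitude
        phase = np.degrees(np.angle(G))     # phase lag of the output sine, in degrees
        print(f"w = {w:5.1f} rad/s   |G(jw)| = {gain:.3f}   phase = {phase:6.1f} deg")

The printed table corresponds exactly to what the measurement procedure described above would produce experimentally, point by point over frequency.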

4. Tools for analysis of controls 

4.1 Nyquist plot 

A Nyquist plot is used in automatic control and signal processing for assessing the stability 
of a system with feedback. It is represented by a graph in polar coordinates in which the 
gain and phase of a frequency response are plotted. The plot of these phasor quantities 
shows the phase as the angle and the magnitude as the distance from the origin (see Fig. 4). 
The Nyquist plot is named after Harry Nyquist, a former engineer at Bell Laboratories. 



Fig. 4. Nyquist plots of linear transfer elements (first-order and second-order systems) 

Assessment of the stability of a closed-loop negative feedback system is done by applying 
the Nyquist stability criterion to the Nyquist plot of the open-loop system (i.e. the same 
system without its feedback loop). This method is easily applicable even for systems with 
delays, which may appear difficult to analyze by means of other methods. 

Nyquist criterion: We consider a system whose open-loop transfer function (OLTF) is G(s); 
when placed in a closed loop with feedback H(s), the closed-loop transfer function (CLTF) 
then becomes G / (1 + G H). The case where H = 1 is usually taken when investigating stability, 
and then the characteristic equation, used to predict stability, becomes G + 1 = 0. 

We first construct the Nyquist contour, a contour that encompasses the right half of the 
complex plane: 

• a path travelling up the jω axis, from -j∞ to +j∞; 

• a semicircular arc, with radius r → ∞, that starts at +j∞ and travels clockwise to -j∞. 






The Nyquist contour mapped through the function 1 + G(s) yields a plot of 1 + G(s) in the 
complex plane. By the argument principle, the number of clockwise encirclements of the 
origin must be the number of zeros of 1 + G(s) in the right-half complex plane minus the number of 
poles of 1 + G(s) in the right-half complex plane. If instead the contour is mapped through 
the open-loop transfer function G(s), the result is the Nyquist plot of G(s). By counting the 
resulting contour's encirclements of -1, we find the difference between the number of zeros 
and poles of 1 + G(s) in the right-half complex plane. Recalling that the zeros of 1 + G(s) are 
the poles of the closed-loop system, and noting that the poles of 1 + G(s) are the same as the 
poles of G(s), we now state the Nyquist criterion: 

Given a Nyquist contour Γs, let P be the number of poles of G(s) encircled by Γs and Z be 
the number of zeros of 1 + G(s) encircled by Γs. Alternatively, and more importantly, Z is 
the number of poles of the closed-loop system in the right half plane. The resultant contour 
in the G(s)-plane, Γ_G(s), shall encircle (clockwise) the point (-1 + j0) N times such 
that N = Z - P. For stability of a system, we must have Z = 0, i.e. the number of closed-loop 
poles in the right half of the s-plane must be zero. Hence, the number of counterclockwise 
encirclements about (-1 + j0) must be equal to P, the number of open-loop poles in the right 
half plane (Faulkner, 1969), (Franklin, 2002). 
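
The encirclement count in the criterion can also be estimated numerically. The sketch below is a rough illustration with an assumed open-loop transfer function (not taken from this chapter): it samples L(jω) along the imaginary axis, takes the winding of the curve around the critical point -1 from the accumulated change of argument, and combines it with the number of open-loop right-half-plane poles. It assumes a strictly proper L(s) with no poles on the imaginary axis, so that the large semicircle of the Nyquist contour contributes nothing.

    import numpy as np

    # Assumed open-loop transfer function L(s) = 6 / ((s+1)(s+2)(s+3)), unity feedback
    num = np.array([6.0])
    den = np.polymul(np.polymul([1.0, 1.0], [1.0, 2.0]), [1.0, 3.0])

    w = np.linspace(-1e3, 1e3, 200001)                 # sweep of the jw axis
    L = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)

    # counterclockwise winding of the Nyquist curve around -1, from the unwrapped argument
    dtheta = np.unwrap(np.angle(L + 1.0))
    ccw_turns = (dtheta[-1] - dtheta[0]) / (2.0 * np.pi)

    P = int(np.sum(np.real(np.roots(den)) > 0))        # open-loop poles in the right half plane
    N_cw = -int(round(ccw_turns))                      # clockwise encirclements of -1
    Z = N_cw + P                                       # Nyquist criterion: Z = N + P
    print(f"clockwise encirclements of -1: {N_cw}, open-loop RHP poles: {P}, closed-loop RHP poles: {Z}")

For this example the curve does not encircle -1 and there are no open-loop right-half-plane poles, so Z = 0 and the closed loop is stable.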

4.2 Bode diagram 

A Bode plot is a plot of either the magnitude or the phase of a transfer function T(jω) as a 
function of ω. The magnitude plot is the more common one because it represents the gain 
of the system. Therefore, the term "Bode plot" usually refers to the magnitude plot (Thomas, 
2004), (William, 1996), (Willy, 2006). The rules for making Bode plots can be derived from 
the following transfer function: 

T(s) = K (s/ω0)^(±n) 

where n is a positive integer. For +n as the exponent, the function has n zeros at s = 0; for 
-n, it has n poles at s = 0. With s = jω, it follows that T(jω) = K j^(±n) (ω/ω0)^(±n), 
|T(jω)| = K (ω/ω0)^(±n) and ∠T(jω) = ±n × 90°. If ω is increased by a factor of 10, |T(jω)| 
changes by a factor of 10^(±n). Thus a plot of |T(jω)| versus ω on log-log scales has a slope 
of ±n decades/decade. There are 20 dB in a decade, so the slope can also be 
expressed as ±20n dB/decade. 

In order to give an example, (Fig. 5) shows the Bode diagrams of the first-order and second-order 
lag. Initial and final values of the phase lag courses can be seen clearly. The same 
holds for the initial values of the gain courses. Zero, the final value of these courses, lies at 
negative infinity because of the logarithmic representation. Furthermore, for the second-order 
lag the resonance magnification for smaller dampings can be seen at the resonance 
frequency ω0. 

Even when a transfer function is given, a graphical analysis using these two diagrams 
may be clearer, and of course it can be checked more easily than, for example, a numerical 
analysis done by a computer. It will almost always be easier to estimate the effects of 
changes in the values of the parameters of the system if we use a graphical approach 
instead of a numerical one. For this reason, today every control design software tool 
provides the possibility of computing the Nyquist plot or the Bode diagram for a given 
transfer function by merely clicking on a button. 
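
For completeness, here is a minimal sketch that computes a Bode plot numerically with scipy.signal.bode and checks the ±20n dB/decade slope rule at high frequencies. The second-order lag used below is an assumed example, not a system from the chapter.

    import numpy as np
    from scipy import signal

    # Assumed second-order lag: G(s) = w0^2 / (s^2 + 2*D*w0*s + w0^2)
    w0, D = 10.0, 0.2
    G = signal.TransferFunction([w0**2], [1.0, 2.0 * D * w0, w0**2])

    w = np.logspace(-1, 4, 600)                     # rad/s
    w, mag_db, phase_deg = signal.bode(G, w)

    print(f"gain near w0: {np.interp(w0, w, mag_db):.1f} dB (resonance peak for small damping)")
    print(f"phase at high frequency: {phase_deg[-1]:.0f} deg")

    # slope over the last decade: two poles should give roughly -40 dB/decade
    i = np.searchsorted(w, w[-1] / 10.0)
    slope = (mag_db[-1] - mag_db[i]) / (np.log10(w[-1]) - np.log10(w[i]))
    print(f"high-frequency slope: {slope:.1f} dB/decade")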






Fig. 5. Bode diagram of first-order and second-order systems 

4.3 Evans root locus 

In addition to determining the stability of the system, the root locus can be used to design 
for the damping ratio and natural frequency of a feedback system (Franklin et al., 2002). 
Lines of constant damping ratio can be drawn radially from the origin and lines of constant 
natural frequency can be drawn as arcs whose center points coincide with the origin (see 
Fig. 6). By selecting a point along the root locus that coincides with a desired damping ratio 
and natural frequency, a gain K can be calculated and implemented in the controller. More 
elaborate techniques of controller design using the root locus are available in most control 
textbooks: for instance, lag, lead, PI, PD and PID controllers can be designed approximately 
with this technique. 

The definition of the damping ratio and natural frequency presumes that the overall 
feedback system is well approximated by a second-order system, that is, that the system has a 
dominant pair of poles. This is often not the case, so it is good practice to simulate the 
final design to check whether the project goals are satisfied. 








Fig. 6. Evans root locus of a second-order system 

Suppose there is a plant (process) with a transfer function expression P(s), and a forward 
controller with both an adjustable gain K and a transfer function expression C(s). A unity 
feedback loop is constructed to complete this feedback system. For this system, the overall 
transfer function is given by: 

T(s) = K C(s) P(s) / (1 + K C(s) P(s))     (3) 



Thus the closed-loop poles of the transfer function are the solutions to the equation 
1 + K C(s) P(s) = 0. The principal feature of this equation is that roots may be found 
wherever K C P = -1. The variability of K, the gain for the controller, removes amplitude 
from the equation, meaning the complex-valued evaluation of the polynomial in s, 
C(s) P(s), needs to have a net phase of 180 degrees wherever there is a closed-loop pole. The 
geometrical construction adds the angle contributions from the vectors extending from each of 
the poles of C P to a prospective closed-loop root (pole) and subtracts the angle 
contributions from similar vectors extending from the zeros, requiring the sum to be 180 degrees. The 
vector formulation arises from the fact that each polynomial term in the factored C P, (s - a) 
for example, represents the vector from a, which is one of the roots, to s, which is the 
prospective closed-loop pole we are seeking. Thus the entire polynomial is the product of 
these terms, and according to vector mathematics the angles add (or subtract, for terms in 
the denominator) and the lengths multiply (or divide). So to test a point for inclusion on the root 
locus, all you do is add the angles to all the open-loop poles and zeros. Indeed a form of 
protractor, the "spirule", was once used to draw exact root loci. 

From the function T(s), we can also see that the zeros of the open-loop system (C P) are also 
the zeros of the closed-loop system. It is important to note that the root locus only gives the 
location of the closed-loop poles as the gain K is varied, given the open-loop transfer function. 
The zeros of a system cannot be moved. 




Using a few basic rules, the root locus method can plot the overall shape of the path (locus) 
traversed by the roots as the value of K varies. The plot of the root locus then gives an idea 
of the stability and dynamics of this feedback system for different values of K. 
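
The following sketch traces this idea numerically: for each value of K it computes the closed-loop poles as the roots of the characteristic polynomial den(s) + K·num(s) of 1 + K C(s) P(s). The third-order open loop used here is an assumed example, not a system from the chapter.

    import numpy as np

    # Assumed open loop C(s)P(s) = 1 / (s (s+2)(s+5)) in a unity feedback loop with gain K
    num = np.array([1.0])
    den = np.polymul([1.0, 0.0], np.polymul([1.0, 2.0], [1.0, 5.0]))

    def closed_loop_poles(K):
        # roots of 1 + K*C(s)*P(s) = 0, i.e. den(s) + K*num(s) = 0
        return np.roots(np.polyadd(den, K * num))

    for K in (1.0, 10.0, 60.0, 200.0):
        poles = closed_loop_poles(K)
        stable = bool(np.all(poles.real < 0))
        print(f"K = {K:6.1f}   stable: {stable}   poles: {np.round(poles, 3)}")

As K grows, the dominant pole pair moves towards the imaginary axis and eventually crosses it, which is exactly the behaviour a root locus plot makes visible at a glance.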

5. Ingredients for a robust control 

The design of a control system consists in adjusting the transfer function of the compensator so as to 
obtain the desired closed-loop properties and behavior. In addition to the constraint of 
stability, we typically look for the best possible performance. This task is complicated by two 
principal difficulties. On the one hand, the design is carried out on an idealized model of the 
system. We must therefore ensure robustness to imperfections in the model, i.e. 
ensure the desired properties for a family of systems around the reference model. On 
the other hand, the design faces inherent limitations such as the compromise between performance and 
robustness. 

This section shows how these objectives and constraints can be formulated and quantified in 
a consistent framework suitable for taking them into account systematically. 

5.1 Robustness to uncertainty 

The design of a control system is carried out starting from a model of the real system, often called 
the nominal model or reference model. This model may come from the equations of physics or 
from a process identification. In any case, this model is only an approximation of reality. Its 
deficiencies can be multiple: neglected nonlinear dynamics, uncertainty on certain 
physical parameters, simplifying assumptions, measurement errors during identification, 
etc. In addition, some system parameters can vary significantly with time or operating 
conditions. Finally, unforeseeable external factors can disturb the 
operation of the control system. 

It is thus insufficient to optimize the control with respect to the nominal model: it is also necessary 
to guard against modeling uncertainty and external disturbances. Although these factors 
are poorly known, one generally has information on their maximum amplitude or their 
statistical nature: for example, the frequency of the oscillation, the maximum intensity of the 
wind, or the min and max bounds on a parameter value. It is from this basic knowledge 
that one will try to carry out a robust design. 

There are two classes of uncertain factors. A first class includes external disturbances and 
noise. These are random signals or actions that disrupt the controlled system. 
They are identified according to their point of entry into the loop. Referring again to (Fig. 2), 
there are basically: 

• the disturbance at the control input w i, which can come from discretization or 
quantization errors of the control or from parasitic actions on the actuators; 

• disturbances at the output w, corresponding to external or unpredictable effects on the 
system output, e.g. the wind for an airplane, an air pressure change for a 
chemical reactor, etc. 

It should be noted that these external actions do not modify the internal dynamic behavior of the 
system, but only the "trajectory" of its outputs. 

A second class of uncertain factors gathers the imperfections and variations of the 
dynamic model of the system. Recall that robust control techniques apply to finite-dimensional 
linear models, while real systems are generally nonlinear and infinite-dimensional. 
Typically, the model used thus neglects nonlinearities and is valid only in a 
limited frequency band. It depends, moreover, on physical parameters whose values can 
fluctuate and are often known only roughly. For practical reasons, one distinguishes: 

• dynamic (unstructured) uncertainty, which gathers the dynamics neglected in the model. 
There is usually only an upper bound on the amplitude of these dynamics; one must 
thus assume and guard against the worst case within this bound; 

• parametric (structured) uncertainty, which is related to variations or estimation errors in 
certain physical parameters of the system, or to uncertainties of a 
dynamic nature entering the loop at different points. Parametric uncertainty 
arises mainly when the model is obtained from the equations of physics. 
The way in which the parameters influence the behavior of the system determines 
the "structure" of the uncertainty. 

5.2 Representation of the modeling uncertainty 

Dynamic (unstructured) uncertainty can encompass very diverse physical phenomena 
(linear or nonlinear, static or time-varying, friction, hysteresis, etc.). The techniques 
discussed in this chapter are particularly relevant when one has no specific 
information beyond an estimate of the maximum amplitude of the dynamic uncertainty, in other 
words, when the uncertainty is reasonably modeled by a ball in the space of bounded operators 
from L2 into L2. 

Such a model is of course very rough and tends to include configurations without physical 
meaning. If the real system does not contain important nonlinearities, it is often preferable to 
restrict oneself to a stationary, purely linear model of the dynamic uncertainty. We can then 
weight the degree of uncertainty according to frequency and express the fact that the 
system is better known at low than at high frequencies. The uncertainty is then represented as a 
perturbing LTI system ΔG(s) which is added to the nominal model G(s) of the real system: 



G_true(s) = G(s) + ΔG(s)     (4) 

This system must be BIBO-stable (bounded from L2 into L2), and one usually has an estimate of the 
maximum amplitude of ΔG(jω) in each frequency band. Typically, this amplitude is small at 
low frequencies and grows rapidly at high frequencies, where the neglected dynamics 
become important. This profile is illustrated in (Fig. 7). It defines a family of systems whose 
envelope on the Nyquist diagram is shown in (Fig. 8) (SISO case). The radius of the 
uncertainty disk at frequency ω is |ΔG(jω)|. 



Fig. 7. Standard profile for |ΔG(jω)| 






Fig. 8. Family of systems 

The information on the amplitude |ΔG(jω)| of the uncertainty can be quantified in several 
ways: 

• additive uncertainty: the real system is of the form: 

G_true(s) = G(s) + Δ(s)     (5) 

where Δ(s) is a stable transfer function satisfying 

||W_l(jω) Δ(jω) W_r(jω)|| < 1     (6) 

for certain weighting functions W_l(s) and W_r(s). These weighting matrices make it possible to 
incorporate information on the frequency dependence and directionality of the maximum 
amplitude of Δ(s) (see singular values). 

• multiplicative uncertainty at the input: the real system is of the form: 

G_true(s) = G(s) (I + Δ(s))     (7) 

where Δ(s) is as above. This representation models errors or fluctuations in the input 
behavior. 

• multiplicative uncertainty at the output: the real system is of the form: 

G_true(s) = (I + Δ(s)) G(s)     (8) 

This representation is adapted to modeling errors or fluctuations in the output behavior. 
According to the data on the imperfections of the model, one will choose one or the other of 
these representations. Let us note that multiplicative uncertainty has a relative character. 
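
As a rough numerical illustration of these profiles, the sketch below compares an assumed nominal model with an assumed "true" plant containing a neglected fast pole (neither is taken from the chapter) and evaluates the additive and relative (multiplicative) uncertainty over frequency; the relative profile is small at low frequencies and approaches one where the neglected dynamics dominate, matching the shape of Fig. 7.

    import numpy as np

    # Assumed nominal model G(s) = 1/(s+1) and "true" plant with a neglected fast pole
    w = np.logspace(-2, 3, 400)
    s = 1j * w
    G_nom  = 1.0 / (s + 1.0)
    G_true = 1.0 / ((s + 1.0) * (0.05 * s + 1.0))

    delta_add  = np.abs(G_true - G_nom)              # |ΔG(jw)|: additive uncertainty, as in (5)
    delta_mult = np.abs((G_true - G_nom) / G_nom)    # |Δ(jw)|: relative (multiplicative) uncertainty, as in (7)

    for i in (0, 200, 399):
        print(f"w = {w[i]:8.2f} rad/s   |dG| = {delta_add[i]:.4f}   |d| = {delta_mult[i]:.4f}")

In a design, a weighting function would then be chosen as an upper envelope of such a curve.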



5.3 Robust stability 

Let the linear system be given by the transfer function 



G(s) = (b_m s^m + b_(m-1) s^(m-1) + ... + b_1 s + b_0) / (a_n s^n + a_(n-1) s^(n-1) + ... + a_1 s + a_0) e^(-sT),   where m ≤ n     (9) 

with the gain 

V = b_0 / a_0     (10) 



First we must explain what we mean by stability of a system. Several possibilities exist to 

define the term, two of which we will discuss now. A third definition by the Russian 

mathematician Lyapunov will be presented later. The first definition is based on the step 

response of the system: 

Definition 1 A system is said to be stable if, for t → ∞, its step response converges to a finite value. 
Otherwise, it is said to be unstable. 

That the unit step function has been chosen to stimulate the system does not impose any 
restriction, because if the height of the step is modified by a factor k, the values at the 
system output will change by the same factor k, too, according to the linearity of the system. 
Convergence towards a finite value is therefore preserved. 

A motivation for this definition can be the following illustration: if a system 
converges towards a finite value after the strong stimulation that a step in the input signal 
represents, it can be assumed that it will not be caught in permanent oscillations for other 
kinds of stimulation. 

It is obvious that, according to this definition, the first-order and the second-order lag are 
stable, and that the integrator is unstable. 

Another definition pays attention to the possibility that the input quantity may be subject to 
permanent changes: 

Definition 2 A linear system is called stable if, for an input signal with limited amplitude, its output 
signal also shows a limited amplitude. This is BIBO stability (bounded input - bounded 
output). 

Immediately the question of the connection between the two definitions arises, which we will 
now examine briefly. The starting point of the discussion is the convolution integral, which 
gives the relationship between the system's input and output quantities (g being the impulse 
response): 

y(t) = ∫_0^t g(t - τ) x(τ) dτ = ∫_0^t g(τ) x(t - τ) dτ     (11) 

x(t) is bounded if and only if |x(t)| ≤ k holds (with k > 0) for all t. This implies: 

|y(t)| ≤ ∫_0^t |g(τ)| |x(t - τ)| dτ ≤ k ∫_0^∞ |g(τ)| dτ     (12) 

Now, with absolute convergence of the integral of the impulse response, 

∫_0^∞ |g(τ)| dτ = c < ∞     (13) 

y(t) will be limited by kc as well, and thus the whole system will be BIBO-stable. Similarly, it 
can be shown that the integral (13) converges absolutely for all BIBO-stable systems. BIBO 
stability and the absolute convergence of the impulse response integral are thus equivalent 
properties of a system. 
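
A quick numerical check of this equivalence can be made by approximating the integral in (13) over a long but finite horizon. The sketch below uses an assumed stable second-order example (not a system from the chapter) and scipy's impulse response.

    import numpy as np
    from scipy import signal

    # Assumed stable example G(s) = 5 / (s^2 + 2 s + 5)
    G = signal.TransferFunction([5.0], [1.0, 2.0, 5.0])

    t = np.linspace(0.0, 50.0, 5001)
    t, g = signal.impulse(G, T=t)               # impulse response g(t)

    integral = np.trapz(np.abs(g), t)           # approximates the integral in (13) over [0, 50]
    print(f"integral of |g(t)| over [0, 50] is about {integral:.3f} (finite, consistent with BIBO stability)")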

Now we must find the conditions under which the system will be stable in the sense of a 
finite step response (Definition 2). Regarding the step response of a system in the frequency 
domain, 

y(s) = G(s) (1/s)     (14) 

if we interpret the factor 1/s as an integration (instead of as the Laplace transform of the step 
signal), we obtain 

y(t) = ∫_0^t g(τ) dτ     (15) 

in the time domain for y(0) = 0. y(t) converges to a finite value only if the integral converges: 

∫_0^∞ g(τ) dτ = c < ∞     (16) 

Convergence is obviously a weaker criterion than absolute convergence. Therefore, each 
BIBO-stable system will have a finite step response. To treat stability always in the sense 
of BIBO stability is tempting, because this stronger definition would make further 
differentiations unnecessary. On the other hand, we can simplify the following considerations 
much more if we use the finite-step-response-based definition of stability (Christopher, 
2005), (Arnold, 2006). In addition to this, the two definitions are equivalent as regards rational 
transfer functions anyway. Consequently, henceforth we will think of stability as 
characterized in (Definition 2). 

Sometimes stability is also defined by requiring that the impulse response converge 
towards zero for t → ∞. A glance at the integral (16) shows that this criterion is a necessary 
but not a sufficient condition for stability as defined by (Definition 2), the latter being the 
stronger requirement: if we can prove a finite step response, then the impulse response will 
certainly converge to zero. 

5.3.1 Stability of a transfer function 

If we want to avoid having to explicitly calculate the step response of a system in order to 
prove its stability, then a direct examination of the system's transfer function, trying to 
determine criteria for stability, seems to suggest itself (Levine, 1996). This is relatively 
easy given all the ideas that we have developed up to now about the step response of a rational 
transfer function. The following theorem is valid: 

Theorem 2 A transfer element with a rational transfer function is stable in the sense of (Definition 2) 
if and only if all poles of the transfer function have a negative real part. 

The step response of a rational transfer element is given by: 

y(t) = Σ_λ h_λ(t) e^(s_λ t)     (17) 






For each pole s_λ of multiplicity n_λ, we obtain a corresponding summand h_λ(t) e^(s_λ t), where 
h_λ(t) is a polynomial of degree n_λ - 1. For a pole with a negative real part, this summand 
vanishes for increasing t, as the exponential function converges towards 
zero more quickly than the polynomial h_λ(t) can grow. If all the poles of the transfer function have a 
negative real part, then all corresponding terms disappear. Only the summand h_1(t) e^(s_1 t) for 
the simple pole s_1 = 0 remains, which is due to the step input. The polynomial h_1(t) is of degree 
n_1 - 1 = 0, i.e. a constant, and the exponential function also reduces to a constant. In this 
way, this summand forms the finite final value of the step response, and the system is 
stable. 

We omit the proof in the opposite direction, i.e. that a system is unstable if at least one pole has a 
positive real part, because it would not lead to further insights. It is interesting that (Theorem 
2) holds as well for systems with delay according to (9). The proof of this last statement will 
also be omitted. 

Generally, besides the mere fact of stability, the form of the initial transients in reaction to external 
excitations will also be of interest. If a plant has, among others, a complex 
conjugate pair of poles s_λ, s̄_λ, the ratio |Re(s_λ)| / √(Re(s_λ)² + Im(s_λ)²) is equal to the damping 
ratio D and is therefore responsible for the form of the initial transient corresponding to this 
pair of poles. In practical applications one will therefore pay attention not only to whether the 
system's poles have a negative real part, but also to whether the damping ratio D has a 
sufficiently high value, i.e. whether a complex conjugate pair of poles lies at a reasonable 
distance from the imaginary axis. 
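
A minimal sketch of this check, using an assumed third-order denominator that is not taken from the chapter: compute the poles, verify that all real parts are negative, and evaluate the damping ratio D of the complex pair.

    import numpy as np

    # Assumed denominator of a rational transfer function: s^3 + 4 s^2 + 5 s + 10
    den = np.array([1.0, 4.0, 5.0, 10.0])
    poles = np.roots(den)

    print("poles:", np.round(poles, 3))
    print("stable (all real parts negative):", bool(np.all(poles.real < 0)))

    # damping ratio D = |Re(s)| / sqrt(Re(s)^2 + Im(s)^2) for each complex conjugate pair
    for p in poles[poles.imag > 0]:
        D = abs(p.real) / abs(p)
        print(f"complex pair at {np.round(p, 3)}: damping ratio D = {D:.3f}")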

5.3.2 Stability of a control loop 

The system whose stability must be determined will in the majority of cases be a closed 
control loop (Goodwin, 2001), as shown in (Fig. 2). A simplified structure is given in (Fig. 9). 
Let the transfer function of the control unit be K(s), the plant be given by G(s) and the 
metering element by M(s). To keep the further derivations simple, we set M(s) to 1, i.e. we 
neglect the dynamic behavior of the metering element; for simple cases this is justified, and it should 
normally be no problem to take the metering element into consideration as well. 






Fig. 9. Closed-loop system 

We summarize the disturbances that could affect the closed-loop system at virtually any 
point into a single disturbance d that we impress at the plant input. This step 
simplifies the theory without making the situation easier for the controller than it would be in 
practical applications. Choosing the plant input as the point where the disturbance acts is the 
most unfavorable case: the disturbance can affect the plant and no countermeasure can be 
applied, as the controller can only counteract after the changes become visible at the system output. 




To be able to apply the criteria of stability to this system, we must first calculate the transfer 
function that describes the transfer characteristic of the entire system between the input 
quantity ω and the output quantity y. This is the transfer function of the closed loop, which 
is sometimes called the reference (signal) transfer function. To calculate it, we first set d to 
zero. In the frequency domain we get 

y(s) = G(s) u(s) = G(s) K(s) (ω(s) - y(s))     (18) 

T(s) = y(s) / ω(s) = G(s) K(s) / (G(s) K(s) + 1)     (19) 

In a similar way, we can calculate a disturbance transfer function, which describes the transfer 
characteristic between the disturbance d and the output quantity y: 

S(s) = y(s) / d(s) = G(s) / (G(s) K(s) + 1)     (20) 

The term G(s)K(s) has a special meaning: if we remove the feedback loop, this term 
represents the transfer function of the resulting open circuit. Consequently, G(s)K(s) is 
sometimes called the open-loop transfer function. The gain of this function (see (9)) is called 
the open-loop gain. 

We can see that the reference transfer function and the disturbance transfer function have 
the same denominator G(s)K(s) + 1. On the other hand, by (Theorem 2), it is the denominator 
of the transfer function that determines the stability. It follows that only the open-loop 
transfer function affects the stability of the system, not the point of application of an input 
quantity. We can therefore restrict the analysis of stability to a consideration of the 
term G(s)K(s) + 1. 

However, since both the numerator and denominator of the two transfer functions T(s) and 
S(s) are obviously relatively prime to each other, the zeros of G(s)K(s) + 1 are the poles of 
these functions, and as a direct consequence of (Theorem 2) we can state: 

Theorem 3 A closed-loop system with the open-loop transfer function G(s)K(s) is stable if and only if 
all solutions of the characteristic equation 

G(s)K(s) + 1 = 0     (21) 

have a negative real part. 

Computing these zeros in an analytic way will no longer be possible if the degree of the 
plant is greater than two, or if an exponential function forms a part of the open-loop transfer 
function. Exact positions of the zeros, though, are not necessary for the analysis of stability; 
only whether the solutions have a positive or negative real part is of importance. 
For this reason, in the history of control theory, stability criteria have been developed 
that can be used to decide this precisely without complicated calculations 
(Christopher, 2005), (Franklin, 2002). 
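
As a numerical illustration of Theorem 3, the closed-loop poles can be obtained as the roots of the polynomial form of G(s)K(s) + 1 = 0. The PI controller and plant below are assumed examples, not taken from the chapter.

    import numpy as np

    # Assumed controller K(s) = (2 s + 1)/s and plant G(s) = 1 / ((s+1)(s+3))
    num_K, den_K = np.array([2.0, 1.0]), np.array([1.0, 0.0])
    num_G, den_G = np.array([1.0]), np.polymul([1.0, 1.0], [1.0, 3.0])

    # G(s)K(s) + 1 = 0  is equivalent to  num_G*num_K + den_G*den_K = 0
    char_poly = np.polyadd(np.polymul(num_G, num_K), np.polymul(den_G, den_K))
    poles = np.roots(char_poly)

    print("characteristic polynomial coefficients:", char_poly)
    print("closed-loop poles:", np.round(poles, 3))
    print("closed loop stable:", bool(np.all(poles.real < 0)))

The same characteristic polynomial governs both the reference and the disturbance transfer function, in agreement with the remark that only G(s)K(s) + 1 determines stability.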

5.3.3 Lyapunov's stability theorem 

We state below a variant of Lyapunov's direct method that establishes global asymptotic 
stability. 






Theorem 4 Consider the dynamical system ẋ(t) = f(x(t)) and let x = 0 be its unique 
equilibrium point. If there exists a continuously differentiable function V : R^n → R such that 

V(0) = 0     (22) 

V(x) > 0   for all x ≠ 0     (23) 

||x|| → ∞  ⇒  V(x) → ∞     (24) 

V̇(x) < 0   for all x ≠ 0,     (25) 

then x = 0 is globally asymptotically stable. 

Condition (25) is what we refer to as the monotonicity requirement of Lyapunov's theorem. In 
the condition, V̇(x) denotes the derivative of V(x) along the trajectories of x(t) and is given 
by 

V̇(x) = ⟨∂V(x)/∂x, ẋ(t)⟩ 

where ⟨·,·⟩ denotes the standard inner product in R^n and ∂V(x)/∂x ∈ R^n is the gradient of 
V(x). As far as the first two conditions are concerned, it is only needed to assume that V(x) 
is lower bounded and achieves its global minimum at x = 0; there is no conservatism, 
however, in requiring (22) and (23). A function satisfying condition (24) is called radially 
unbounded. We refer the reader to (Khalil, 1992) for a formal proof of this theorem and for an 
example that shows condition (24) cannot be removed. Here, we give the geometric intuition 
of Lyapunov's theorem, which essentially carries all of the ideas behind the proof. 




Fig. 10. Geometric interpretation of Lyapunov's theorem. 

(Fig. 10) shows a hypothetical dynamical system in ℝ^2. The trajectory is moving in the (x_1, x_2) plane, but we have no knowledge of where the trajectory is as a function of time. On the other hand, we have a scalar-valued function V(x), plotted on the z-axis, which has the

guaranteed property that as the trajectory moves, the value of this function along the trajectories strictly decreases. Since V(x(t)) is lower bounded by zero and is strictly decreasing, it must converge to a nonnegative limit as time goes to infinity. It takes a relatively straightforward argument appealing to the continuity of V(x) and V̇(x) to show that the limit of V(x(t)) cannot be strictly positive, and indeed conditions (22)-(25) imply

V(x(t)) → 0 as t → ∞

Since x = 0 is the only point in space where V(x) vanishes, we can conclude that x(t) goes to the origin as time goes to infinity.

It is also insightful to think about the geometry in the (x_1, x_2) plane. The level sets of V(x) are plotted in (Fig. 10) with dashed lines. Since V(x(t)) decreases monotonically along trajectories, we can conclude that once a trajectory enters one of the level sets, say given by V(x) = c, it can never leave the set Ω_c := {x ∈ ℝ^n : V(x) ≤ c}. This property is known as invariance of sub-level sets.

Once again we emphasize that the significance of Lyapunov's theorem is that it allows stability of the system to be verified without explicitly solving the differential equation. Lyapunov's theorem, in effect, turns the question of determining stability into a search for a so-called Lyapunov function, a positive definite function of the state that decreases monotonically along trajectories. There are two natural questions that immediately arise. First, do we even know that Lyapunov functions always exist?

Second, if they do in fact exist, how would one go about finding one? In many situations, the answer to the first question is positive. The type of theorems that prove existence of Lyapunov functions for every stable system are called converse theorems. One of the well known converse theorems is a theorem due to Kurzweil, which states that if f in (Theorem 4) is continuous and the origin is globally asymptotically stable, then there exists an infinitely differentiable Lyapunov function satisfying the conditions of (Theorem 4). We refer the reader to (Khalil, 2002) and (Bacciotti & Rosier, 2005) for more details on converse theorems. Unfortunately, converse theorems are often proven by assuming knowledge of the solutions of (Theorem 4) and are therefore useless in practice. By this we mean that they offer no systematic way of finding the Lyapunov function. Moreover, little is known about the connection of the dynamics f to the Lyapunov function V. Among the few results in this direction, the case of linear systems is well settled, since a stable linear system always admits a quadratic Lyapunov function. It is also known that stable and smooth homogeneous systems always have a homogeneous Lyapunov function (Rosier, 1992).
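As a small illustration of the last remark, the following Python sketch constructs a quadratic Lyapunov function V(x) = x^T P x for a hypothetical stable linear system by solving the Lyapunov equation A^T P + P A = −Q; the matrix A chosen here is only an example, not from the text.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])        # Hurwitz example: eigenvalues -1 and -2
    Q = np.eye(2)

    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
    # so passing A^T and -Q yields A^T P + P A = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)

    print("P =\n", P)
    print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
    # Along trajectories, dV/dt = x^T (A^T P + P A) x = -x^T Q x < 0 for x != 0.
    print("residual of Lyapunov equation:", np.max(np.abs(A.T @ P + P @ A + Q)))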

5.3.4 Criterion of Cremer, Leonhard and Michailow 

Initially let us discuss a criterion which was developed independently by Cremer, Leonhard and Michailow during the years 1938-1947. The focus of interest is the phase shift of the Nyquist plot of a polynomial with respect to the zeros of the polynomial (Mansour, 1992). Consider a polynomial of the form

P(s) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 = (s − s_1)(s − s_2) ··· (s − s_n)    (26)






be given. Setting s = jω and substituting we obtain

P(jω) = ∏_{v=1}^{n} (jω − s_v) = ∏_{v=1}^{n} |jω − s_v| · e^{j Σ_v φ_v(ω)} = |P(jω)| e^{jφ(ω)}    (27)

We can see that the frequency response P(jω) is the product of the vectors (jω − s_v), where the phase φ(ω) is given by the sum of the angles φ_v(ω) of those vectors. (Fig. 11) shows the situation corresponding to a pair of complex conjugated zeros with negative real part and one zero with a positive real part.



Fig. 11. Illustration to the Cremer-Leonhard-Michailow criterion

If the parameter ω traverses the interval (−∞, +∞), it causes the end point of the vectors (jω − s_v) to move along the axis of imaginaries in positive direction. For zeros with negative real part, the corresponding angle φ_v traverses the interval from −π/2 to +π/2; for zeros with positive real part, the interval from +3π/2 to +π/2. For zeros lying on the axis of imaginaries, the corresponding angle φ_v initially has the value −π/2 and switches to the value +π/2 at jω = s_v.

We will now analyze the phase of the frequency response, i.e. the entire course which the angle φ(ω) takes. This angle is just the sum of the angles φ_v(ω). Consequently, each zero with a negative real part contributes an angle of +π to the phase shift of the frequency response, and each zero with a positive real part an angle of −π. Nothing can be said about zeros located on the imaginary axis because of the discontinuous course taken by the phase there. But we can immediately decide whether such zeros exist by inspecting the Nyquist plot of the polynomial P(s): if it has a purely imaginary zero s = s_v, the Nyquist plot must pass through the origin at the frequency ω = |s_v|. This leads to the following theorem:






Theorem 5 A polynomial P(s) of degree n with real coefficients has only zeros with negative real part if and only if the corresponding Nyquist plot does not pass through the origin of the complex plane and the phase shift Δφ of the frequency response is equal to nπ for −∞ < ω < +∞. If ω traverses the interval 0 ≤ ω < +∞ only, then the required phase shift is equal to nπ/2.

We can easily prove the fact that for 0 ≤ ω < +∞ the required phase shift is only nπ/2, i.e. only half the value:

For zeros lying on the axis of reals, it is obvious that their contribution to the phase shift will be only half as much if ω traverses only half of the axis of imaginaries (from 0 to ∞). The zeros with an imaginary part different from zero are more interesting. Because of the polynomial's real-valued coefficients, they can only appear as a pair of complex conjugated zeros. (Fig. 12) shows such a pair with s_1 = s̄_2 and α_1 = −α_2. For −∞ < ω < +∞ the contribution to the phase shift by this pair is 2π. For 0 ≤ ω < +∞, the contribution of s_1 is π/2 + |α_1| and the one for s_2 is π/2 − |α_1|. Therefore, the overall contribution of this pair of zeros is π, so also in this case the phase shift is reduced by one half if only the half axis of imaginaries is taken into consideration.




Fig. 12. Illustration to the phase shift for a complex conjugated pair of poles 
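A numerical version of this phase-shift test is easy to sketch. The Python fragment below is an illustration only; the polynomial and the truncated frequency range are hypothetical choices. It tracks the unwrapped phase of P(jω) over 0 ≤ ω < ∞ and compares the total phase shift with nπ/2.

    import numpy as np

    coeffs = [1.0, 6.0, 11.0, 6.0]        # P(s) = (s+1)(s+2)(s+3), a Hurwitz example
    n = len(coeffs) - 1

    w = np.linspace(0.0, 1e3, 200000)     # half axis 0 <= w < +inf, truncated
    P_jw = np.polyval(coeffs, 1j * w)

    if np.any(np.abs(P_jw) < 1e-9):
        print("Nyquist plot (numerically) passes through the origin: not Hurwitz")
    else:
        phase = np.unwrap(np.angle(P_jw))
        shift = phase[-1] - phase[0]
        print("phase shift = %.3f * pi/2 (n = %d)" % (shift / (np.pi / 2), n))

For the example polynomial the printed value approaches 3, in agreement with Theorem 5 for a Hurwitz polynomial of degree 3.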



6. Beyond this introduction 

There are many good textbooks on classical robust control. Two popular examples are (Dorf & Bishop, 2004) and (Franklin et al., 2002). A less typical and interesting alternative is the recent textbook (Goodwin et al., 2000). All three of these books have at least one chapter devoted to the fundamentals of control theory. Textbooks devoted to robust and optimal control are less common, but there are some available. The best known is probably (Zhou et al., 1995). Other possibilities are (Astrom & Wittenmark, 1996), (Robert, 1994) and (Joseph et al., 2004). An excellent book about the theory and design of classical control is the one by Astrom and Hagglund (Astrom & Hagglund, 1995). A good reference on the limitations of control is (Looze & Freudenberg, 1988). Bode's book (Bode, 1975) is still interesting, although the emphasis is on vacuum tube circuits.




7. References 

Astrom, K. J. & Hagglund, T. (1995). PID Controllers: Theory, Design and Tuning, International Society for Measurement and Control, Seattle, WA, 343p, 2nd edition. ISBN: 1556175167.
Astrom, K. J. & Wittenmark, B. (1996). Computer Controlled Systems, Prentice-Hall, 

Englewood Cliffs, NJ, 555p, 3rd edition. ISBN-10: 0133148998. 
Arnold Zankl (2006). Milestones in Automation: From the Transistor to the Digital Factory, 

Wiley-VCH, ISBN 3-89578-259-9. 
Bacciotti, A. & Rosier, L. (2005). Liapunov functions and stability in control theory, Springer, 238 

p, ISBN:3540213325. 
Bode, H. W. (1975). Network Analysis and Feedback Amplifier Design, R. E. Krieger Pub. Co., 

Huntington, NY. Publisher: R. E. Krieger Pub. Co; 577p, 14th print. ISBN 

0882752421. 
Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning, ISBN 1- 

4018-5806-6. 
Dorf, R. C. & Bishop, R. H. (2005). Modern Control Systems, Prentice-Hall, Upper Saddle River, NJ, 10th edition. ISBN 0131277650.
Faulkner, E.A. (1969): Introduction to the Theory of Linear Systems, Chapman & Hall; ISBN 0- 

412-09400-2. 
Franklin, G. F.; Powell, J. D. & Emami-Naeini, A. (2002). Feedback Control of Dynamical 

Systems, Prentice-Hall, Upper Saddle River, NJ, 912p, 4th edition. ISBN: 0-13- 

032393-4. 
Joseph L. Hellerstein; Dawn M. Tilbury, & Sujay Parekh (2004). Feedback Control of Computing 

Systems, John Wiley and Sons. ISBN 978-0-471-26637-2. 
Boukhetala, D.; Halbaoui, K. and Boudjema, F.(2006). Design and Implementation of a Self 

tuning Adaptive Controller for Induction Motor Drives. International Review of 

Electrical Engineering, 260-269, ISSN: 1827- 6660. 
Goodwin, G. C.; Graebe, S. F. & Salgado, M. E. (2000). Control System Design, Prentice-Hall, Upper Saddle River, NJ, 908p, ISBN: 0139586539.
Goodwin, Graham .(2001). Control System Design, Prentice Hall. ISBN 0-13-958653-9. 
Khalil, H. (2002). Nonlinear systems, Prentice Hall, New Jersey, 3rd edition. ISBN 0130673897. 
Looze, D. P & Freudenberg, J. S. (1988). Frequency Domain Properties of Scalar and 

Multivariable Feedback Systems, Springer- Verlag, Berlin. , 281p, ISBN:038718869. 
Levine, William S., ed (1996). The Control Handbook, New York: CRC Press. ISBN 978-0-849- 

38570-4. 
Mansour, M. (1992). The principle of the argument and its application to the stability and robust 

stability problems, Springer Berlin - Heidelberg, Vo 183, 16-23, ISSN 0170-8643, ISBN 

978-3-540-55961-0. 
Robert F. Stengel (1994). Optimal Control and Estimation, Dover Publications. ISBN 0-486- 

68200-5. 
Rosier, L. (1992). Homogeneous Lyapunov function for homogeneous continuous vector field, 

Systems Control Lett, 19(6):467-473. ISSN 0167-6911 
Thomas H. Lee (2004). The design of CMOS radio- frequency integrated circuits, (Second Edition 

ed.). Cambridge UK: Cambridge University Press, p. §14.6 pp. 451-453.ISBN 0-521- 

83539-9. 




Zhou, K., Doyle J. C, & Glover, K.. (1995). Robust and Optimal Control, Prentice-Hall, Upper 

Saddle River, NJ., 596p, ISBN: 0134565673. 
William S Levine (1996). The control handbook: the electrical engineering handbook series, (Second 

Edition ed.). Boca Raton FL: CRC Press/IEEE Press, p. §10.1 p. 163. ISBN 

0849385709. 
Willy M C Sansen (2006) .Analog design essentials, Dordrecht, The Netherlands: Springer. 

p. §0517-§0527 pp. 157-163.ISBN 0-387-25746-2. 



Robust Control of Hybrid Systems 

Khaled Halbaoui¹,², Djamel Boukhetala² and Fares Boudjema²

¹Power Electronics Laboratory, Nuclear Research Centre of Birine CRNB, BP 180 Ain Oussera 17200, Djelfa,
²Laboratoire de Commande des Processus, ENSP, 10 avenue Pasteur, Hassan Badi, BP 182 El-Harrach,
Algeria

1. Introduction 

The term "hybrid systems" was first used in 1966 Witsenhausen introduced a hybrid model 
consisting of continuous dynamics with a few sets of transition. These systems provide both 
continuous and discrete dynamics have proven to be a useful mathematical model for 
various physical phenomena and engineering systems. A typical example is a chemical 
batch plant where a computer is used to monitor complex sequences of chemical reactions, 
each of which is modeled as a continuous process. In addition to the discontinuities 
introduced by the computer, most physical processes admit components (eg switches) and 
phenomena (eg collision), the most useful models are discrete. The hybrid system models 
arise in many applications, such as chemical process control, avionics, robotics, automobiles, 
manufacturing, and more recently molecular biology. 

The control design for hybrid systems is generally complex and difficult. In the literature, different design approaches are presented for different classes of hybrid systems and different control objectives. For example, when the control objective is concerned with issues such as safety specification, verification and reachability, ideas from discrete event control and the automaton framework are used for the synthesis of control.

One of the most important control objectives is the problem of stabilization. Stability of continuous, non-hybrid systems can be concluded from the characteristics of their vector fields. In hybrid systems, however, the stability properties also depend on the switching rules. For example, in a hybrid system, switching between two stable dynamics can produce instability, while switching between two unstable subsystems can result in stability. The majority of stability results for hybrid systems are extensions of the Lyapunov theories developed for continuous systems. They require the Lyapunov function at consecutive switching times to form a decreasing sequence. Such a requirement is in general difficult to check without calculating the solution of the hybrid dynamics, thus losing the advantage of the Lyapunov approach.

In this chapter, we develop tools for the systematic analysis and robust design of hybrid 
systems, with emphasis on systems that require control algorithms, that is, hybrid control 
systems. To this end, we identify mild conditions that hybrid equations need to satisfy so 
that their behavior captures the effect of arbitrarily small perturbations. This leads to new 
concepts of global solutions that provide a deep understanding not only on the robustness 






properties of hybrid systems, but also on the structural properties of their solutions. 
Alternatively, these conditions allow us to produce various tools for hybrid systems that 
resemble those in the stability theory of classical dynamical systems. These include general 
versions of theorems of Lyapunov stability and the principles of invariance of LaSalle. 



2. Hybrid systems: Definition and examples 

Different models of hybrid systems have been proposed in the literature. They mainly differ 
in the way either the continuous part or the discrete part of the dynamics is emphasized, 
which depends on the type of systems and problems we consider. A general and commonly 
used model of hybrid systems is the hybrid automaton (see e.g. (Dang, 2000) and (Girard, 
2006)). It is basically a finite state machine where each state is associated to a continuous 
system. In this model, the continuous evolutions and the discrete behaviors can be 
considered of equal complexity and importance. By combining the definitions of continuous systems and discrete event systems, hybrid dynamical systems can be defined:
Definition 1 A hybrid system H is a collection H := (Q, X, Σ, U, F, R), where

Q is a finite set, called the set of discrete states;
X ⊂ ℝ^n is the set of continuous states;
Σ is a set of discrete input events or symbols;
U ⊂ ℝ^m is the set of continuous inputs;
F : Q × X × U → ℝ^n is a vector field describing the continuous dynamics;
R : Q × X × Σ × U → Q × X describes the discrete dynamics.



Fig. 1. A trajectory of the room temperature.

Example 1 (Thermostat). The thermostat consists of a heater and a thermometer which maintain the temperature of the room in some desired temperature range (Rajeev, 1993). The lower and upper thresholds of the thermostat system are set at x_m and x_M such that x_m < x_M. The heater is maintained on as long as the room temperature is below x_M, and it is turned off whenever the thermometer detects that the temperature reaches x_M. Similarly, the heater remains off as long as the temperature is above x_m and is switched on whenever the






temperature falls to x_m (Fig. 1). In practical situations, exact threshold detection is impossible due to sensor imprecision. Also, the reaction time of the on/off switch is usually non-zero. The effect of these inaccuracies is that we cannot guarantee switching exactly at the nominal values x_m and x_M. As we will see, this causes non-determinism in the discrete evolution of the temperature.

Formally, we can model the thermostat as the hybrid automaton shown in (Fig. 2). The two operation modes of the thermostat are represented by two locations 'on' and 'off'. The on/off switch is modeled by two discrete transitions between the locations. The continuous variable x models the temperature, which evolves according to the following equations.




Fig. 2. Model of the thermostat. 

• If the thermostat is on, the evolution of the temperature is described by:

ẋ = f_1(x, u) = −x + 4 + u    (1)

• When the thermostat is off, the temperature evolves according to the following differential equation:

ẋ = f_2(x, u) = −x + u








Fig. 3. Two different behaviors of the temperature starting at x_0.

The second source of non-determinism comes from the continuous dynamics. The input signal u of the thermostat models the fluctuations in the outside temperature, which we cannot control. (Fig. 3, left) shows this continuous non-determinism. Starting from the initial temperature x_0, the system can generate a "tube" of infinitely many possible trajectories, each of which corresponds to a different input signal u. To capture the uncertainty of sensors, we define the first guard condition of the transition from 'on' to 'off' as an interval [x_M − ε, x_M + ε] with ε > 0. This means that when the temperature enters this interval, the thermostat can either turn the heater off immediately or keep it on for some time provided






that x < x M + s . (Fig. 3 right) illustrates this kind of non-determinism. Likewise, we define 
the second guard condition of the transition from ] off to W as the interval \x m -£,x m +s] . 
Notice that in the thermostat model, the temperature does not change at the switching 
points, and the reset maps are thus the identity functions. 

Finally we define the two staying conditions of the W and ] off locations as x< x M + s and 
x > x M - s respectively, meaning that the system can stay at a location while the 
corresponding staying conditions are satisfied. 
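A minimal simulation of this automaton is sketched below in Python; it is only an illustration, with hypothetical thresholds x_m = 1, x_M = 3 chosen to be compatible with the example dynamics above, zero outside-temperature fluctuation u, and switching performed at the nominal thresholds rather than inside the guard intervals.

    x_m, x_M = 1.0, 3.0        # lower / upper thresholds (hypothetical values)
    dt, T = 0.01, 10.0
    x, q, u = 0.5, "on", 0.0   # initial temperature, initial mode, outside fluctuation

    trace = []
    for k in range(int(T / dt)):
        dx = (-x + 4.0 + u) if q == "on" else (-x + u)   # mode-dependent dynamics
        x += dt * dx                                      # explicit Euler flow step
        if q == "on" and x >= x_M:                        # nominal guard: reach x_M -> off
            q = "off"
        elif q == "off" and x <= x_m:                     # nominal guard: fall to x_m -> on
            q = "on"
        trace.append((round(k * dt, 2), round(x, 3), q))

    print(trace[-5:])

The printed tail of the trace shows the temperature oscillating between the two thresholds while the mode toggles, mirroring the trajectory sketched in Fig. 1.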

Example 2 (Bouncing Ball). Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let x_1 := h be the height of the ball and x_2 := ḣ its velocity (Fig. 4). A hybrid system describing the ball is as follows:

ẋ_1 = x_2,  ẋ_2 = −γ,   x ∈ C
x_1⁺ = x_1,  x_2⁺ = −λx_2,   x ∈ D    (2)
D := {x : x_1 = 0, x_2 ≤ 0},   C := {x : x_1 ≥ 0} \ D

where γ > 0 is the gravity constant and λ ∈ [0, 1) is the restitution coefficient.



This model generates the sequence of hybrid arcs shown in (Fig. 5). However, it does not 
generate the hybrid arc to which this sequence of solutions converges since the origin does 
not belong to the jump set D . This situation can be remedied by including the origin in the 
jump set D . This amounts to replacing the jump set D by its closure. One can also replace 
the flow set C by its closure, although this has no effect on the solutions. 
It turns out that whenever the flow set and jump set are closed, the solutions of the corresponding 
hybrid system enjoy a useful compactness property: every locally eventually bounded sequence of 
solutions has a subsequence converging to a solution. 



h = 0&h~<0? 







!!!!!! 






[ i [ \ [ J 


A - 

h 


;;;;•; 


4« 


































Fig. 4. Diagram for the bouncing ball system 



















Fig. 5. Solutions to the bouncing ball system 

Consider the sequence of hybrid arcs depicted in (Fig. 5). They are solutions of a hybrid "bouncing ball" model showing the position of the ball when dropped from successively lower heights, each time with zero velocity. The sequence of graphs created by these hybrid arcs converges to the graph of a hybrid arc with hybrid time domain given by {0} × {nonnegative integers}, where the value of the arc is zero everywhere on its domain. If this hybrid arc is a solution, then the hybrid system is said to have a "compactness" property. This attribute of the solutions of hybrid systems is critical for robustness properties. It is the hybrid generalization of a property that automatically holds for continuous differential equations and difference equations, where nominal robustness of asymptotic stability is guaranteed.

Solutions of hybrid systems are hybrid arcs that are generated in the following way: Let C and D be subsets of ℝ^n and let f, respectively g, be mappings from C, respectively D, to ℝ^n. The hybrid system H := (f, g, C, D) can be written in the form

ẋ = f(x)    x ∈ C
x⁺ = g(x)   x ∈ D    (3)

The map f is called the "flow map", the map g is called the "jump map", the set C is called the "flow set", and the set D is called the "jump set". The state x may contain variables taking values in a discrete set (logic variables), timers, etc. Consistent with such a situation is the possibility that C ∪ D is a strict subset of ℝ^n. For simplicity, assume that f and g are continuous functions. At times it is useful to allow these functions to be set-valued mappings, which we will denote by F and G, in which case F and G should have a closed graph and be locally bounded, and F should have convex values. In this case, we will write

ẋ ∈ F(x)    x ∈ C
x⁺ ∈ G(x)   x ∈ D    (4)



A solution to the hybrid system (4) starting at a point x_0 ∈ C ∪ D is a hybrid arc x with the following properties:

1. x(0, 0) = x_0;

2. given (s, j) ∈ dom x, if there exists τ ≥ s such that (τ, j) ∈ dom x, then, for all t ∈ [s, τ], x(t, j) ∈ C and, for almost all t ∈ [s, τ], ẋ(t, j) ∈ F(x(t, j));

3. given (t, j) ∈ dom x, if (t, j + 1) ∈ dom x then x(t, j) ∈ D and x(t, j + 1) ∈ G(x(t, j)).
Solutions from a given initial condition are not necessarily unique, even if the flow map is a 
smooth function. 
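The data (f, g, C, D) translate almost directly into a simulation loop. The Python sketch below integrates the bouncing-ball example from Section 2 with an explicit Euler flow step; the gravity constant, restitution coefficient, initial condition and step size are hypothetical choices, and the jump counter is capped only to keep the loop finite near the Zeno accumulation.

    import numpy as np

    gamma, lam = 9.81, 0.8          # gravity and restitution (hypothetical values)
    dt, t_end, max_jumps = 1e-3, 5.0, 200

    def f(x):                        # flow map on C
        return np.array([x[1], -gamma])

    def g(x):                        # jump map on D: velocity is reversed and damped
        return np.array([0.0, -lam * x[1]])

    def in_D(x):                     # jump set D = {x : x_1 <= 0, x_2 <= 0}
        return x[0] <= 0.0 and x[1] <= 0.0

    x = np.array([1.0, 0.0])         # dropped from height 1 with zero velocity
    t, j = 0.0, 0                    # hybrid time: ordinary time t and jump counter j

    while t < t_end and j < max_jumps:
        if in_D(x):
            x = g(x)                 # jump: (t, j) -> (t, j + 1)
            j += 1
        else:
            x = x + dt * f(x)        # flow: Euler step, j unchanged
            t += dt

    print("state:", x, " hybrid time: (%.2f, %d)" % (t, j))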

3. Approaches to analysis and design of hybrid control systems 

The analysis and design tools for hybrid systems in this section are in the form of Lyapunov stability theorems and LaSalle-like invariance principles. Systematic tools of this type form the basis of system theory for purely continuous-time and discrete-time systems. While some similar tools are available for hybrid systems in (Michel, 1999) and (DeCarlo, 2000), the tools presented in this section generalize the conventional continuous-time and discrete-time versions to hybrid systems by defining an equivalent concept of stability and by providing intuitive sufficient conditions for asymptotic stability.




3.1 LaSalle-like invariance principles 

Certain invariance principles for hybrid systems have been published in (Lygeros et al., 2003) and (Chellaboina et al., 2002). Both results require, among other things, uniqueness of solutions, which is not generic for hybrid control systems. In (Sanfelice et al., 2005), general invariance principles were established that do not require uniqueness. The work in (Sanfelice et al., 2005) contains several invariance results, some involving integrals of functions, as for continuous-time systems in (Byrnes & Martin, 1995) or (Ryan, 1998), and some involving nonincreasing energy functions, as in the work of LaSalle (LaSalle, 1967) or (LaSalle, 1976). Such a result will be described here.
Suppose we can find a continuously differentiable function V : ℝ^n → ℝ such that

u_c(x) := ⟨∇V(x), f(x)⟩ ≤ 0   ∀x ∈ C
u_d(x) := V(g(x)) − V(x) ≤ 0   ∀x ∈ D    (5)

Consider a bounded solution x(·,·) with an unbounded hybrid time domain. Then there exists a value r in the range of V such that x tends to the largest weakly invariant set inside the set

M_r := V^{−1}(r) ∩ ( u_c^{−1}(0) ∪ ( u_d^{−1}(0) ∩ g(u_d^{−1}(0)) ) )    (6)

where u_d^{−1}(0) denotes the set of points x satisfying u_d(x) = 0 and g(u_d^{−1}(0)) corresponds to the set of points g(y) with y ∈ u_d^{−1}(0).

The naive combination of the continuous-time and discrete-time results would omit the intersection with g(u_d^{−1}(0)). This term, however, can be very useful for zeroing in on the set to which trajectories converge.

3.2 Lyapunov stability theorems 

Some preliminary results on the existence of non-smooth Lyapunov functions for hybrid systems were published in (DeCarlo, 2000). The first results on the existence of smooth Lyapunov functions, which are closely related to robustness, were published in (Cai et al., 2005). These results required open basins of attraction, but this requirement has since been relaxed in (Cai et al., 2007). The simplified discussion here is borrowed from this later work.

Let O be an open subset of the state space containing a given compact set A and let ω : O → ℝ≥0 be a continuous function which is zero for all x ∈ A, is positive otherwise, and which grows without bound as its argument grows without bound or approaches the boundary of O. Such a function is called a proper indicator for the compact set A on the open set O. An example of such a function is the standard norm on ℝ^n, which is a proper indicator for the origin. More generally, the distance to a compact set A is a proper indicator for A on ℝ^n.

Given an open set O, a proper indicator ω and hybrid data (f, g, C, D), a function V : O → ℝ≥0 is called a smooth Lyapunov function for (f, g, C, D, ω, O) if it is smooth and there exist functions α_1, α_2 belonging to class-K∞ such that

α_1(ω(x)) ≤ V(x) ≤ α_2(ω(x))   ∀x ∈ O
⟨∇V(x), f(x)⟩ ≤ −V(x)   ∀x ∈ C ∩ O    (7)
V(g(x)) ≤ e^{−1} V(x)   ∀x ∈ D ∩ O

Suppose that such a function exists; then it is easy to verify that all solutions of the hybrid system (f, g, C, D) starting in O ∩ (C ∪ D) satisfy

ω(x(t, j)) ≤ α_1^{−1}( e^{−(t+j)} α_2(ω(x(0, 0))) )   ∀(t, j) ∈ dom x    (8)

In particular,

• (pre-stability of A) for each ε > 0 there exists δ > 0 such that x(0, 0) ∈ A + δB implies, for each generalized solution, that x(t, j) ∈ A + εB for all (t, j) ∈ dom x, and

• (pre-attractivity of A on O) every generalized solution starting in O ∩ (C ∪ D) is bounded and, if its time domain is unbounded, it converges to A.

According to one of the principal results in (Cai et al., 2006), there exists a smooth Lyapunov function for (f, g, C, D, ω, O) if and only if the set A is pre-stable and pre-attractive on O and O is forward invariant (i.e., x(0, 0) ∈ O ∩ (C ∪ D) implies x(t, j) ∈ O for all (t, j) ∈ dom x).

One of the primary interests in converse Lyapunov theorems is that they can be employed to establish the robustness of asymptotic stability to various types of perturbations.
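As a toy illustration of conditions (7) (not taken from the text), consider the hypothetical scalar data f(x) = −x on C, g(x) = x/2 on D, with A = {0}, ω(x) = |x| and the candidate V(x) = x^2. The short Python check below verifies the flow and jump inequalities on a grid.

    import numpy as np

    f = lambda x: -x            # flow map
    g = lambda x: 0.5 * x       # jump map
    V = lambda x: x ** 2        # candidate smooth Lyapunov function
    dV = lambda x: 2.0 * x      # its gradient

    xs = np.linspace(-10.0, 10.0, 2001)
    flow_ok = np.all(dV(xs) * f(xs) <= -V(xs) + 1e-12)           # <grad V, f> <= -V
    jump_ok = np.all(V(g(xs)) <= np.exp(-1.0) * V(xs) + 1e-12)   # V(g(x)) <= e^{-1} V(x)

    print("flow condition:", bool(flow_ok), " jump condition:", bool(jump_ok))

Here −2x^2 ≤ −x^2 and x^2/4 ≤ e^{−1}x^2 hold for every x, so both inequalities are satisfied and the estimate (8) applies to this toy system.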

4. Hybrid control application 

In system theory in the 1960s, researchers were already discussing mathematical frameworks to study systems with continuous and discrete dynamics. Current approaches to hybrid systems differ with respect to the emphasis on, or the complexity of, the continuous and discrete dynamics, and on whether they emphasize analysis and synthesis results, analysis only, or simulation only. On one end of the spectrum there are approaches to hybrid systems that represent extensions of system-theoretic ideas for systems (with continuous-valued variables and continuous time) described by ordinary differential equations to include discrete time and variables that exhibit jumps, or that extend results to switching systems. Typically these approaches are able to deal with complex continuous dynamics. Their main emphasis has been on the stability of systems with discontinuities. On the other end of the spectrum there are approaches to hybrid systems, embedded in computer science models and methods, that represent extensions of verification methodologies from discrete systems to hybrid systems. Several approaches to the robustness of asymptotic stability and to the synthesis of hybrid control systems are presented in this section.

4.1 Hybrid stabilization implies input-to-state stabilization 

In the paper (Sontag, 1989) it has been shown, for continuous-time control systems, that smooth stabilization implies smooth input-to-state stabilization with respect to input-additive disturbances. The proof was based on converse Lyapunov theorems for continuous-time systems. Following (Cai et al., 2006) and (Cai et al., 2007), the result generalizes to hybrid control systems via the converse Lyapunov theorem. In particular, if we can find a hybrid controller, with the type of regularity used in Sections 4.2 and 4.3, to achieve asymptotic stability, then input-to-state stability with respect to input-additive disturbances can also be achieved.

Here, consider the special case where the hybrid controller is a logic-based controller whose logic variable takes values in a finite set. Consider the hybrid control system



H :   ξ̇ = f_q(ξ) + η_q(ξ)(u_q + d),   ξ ∈ C_q, q ∈ Q    (9)






where Q is a finite index set, for each q ∈ Q, f_q, η_q : C_q → ℝ^n are continuous functions, C_q and D_q are closed, and G_q has a closed graph and is locally bounded. The signal u_q is the control and d is the disturbance, a vector that is independent of the state and input. Suppose H is stabilizable by logic-based continuous feedback; that is, for the case where d = 0, there exist continuous functions k_q defined on C_q such that, with u_q := k_q(ξ), the nonempty and compact set A = ∪_{q∈Q} A_q × {q} is pre-stable and globally pre-attractive. Converse Lyapunov theorems can then be used to establish the existence of a logic-based continuous feedback that renders the closed-loop system input-to-state stable with respect to d. The feedback has the form



u_q = k_q(ξ) − ε η_q(ξ)^T ∇V_q(ξ)    (10)



where ε > 0 and V_q(ξ) is a smooth Lyapunov function that follows from the assumed asymptotic stability when d = 0. There exist class-K∞ functions α_1 and α_2 such that, with this feedback control, the following estimate holds:



|ξ(t, j)|_A ≤ max{ α_1^{−1}( 2 exp(−t − j) α_2(|ξ(0, 0)|_A) ),  α_1^{−1}( ‖d‖_∞^2 / (2ε) ) }    (11)

where ‖d‖_∞ := sup_{(s,i) ∈ dom d} |d(s, i)|.



4.2 Control Lyapunov functions 

Although control design using a continuously differentiable control-Lyapunov function is well established for input-affine nonlinear control systems, it is well known that not every controllable input-affine nonlinear control system admits a continuously differentiable control-Lyapunov function. A well known example lacking such a control-Lyapunov function is the so-called "Brockett integrator", or "nonholonomic integrator". Although this system does not admit a continuously differentiable control-Lyapunov function, it has been established recently that it admits a good "patchy" control-Lyapunov function. The concept of a patchy control-Lyapunov function, which was presented in (Goebel et al., 2009), is inspired not only by the classical control-Lyapunov function idea, but also by the approach to feedback stabilization based on patchy vector fields proposed in (Ancona & Bressan, 1999). The patchy control-Lyapunov function was designed to overcome a limitation of discontinuous feedbacks, such as those obtained from patchy feedback, namely a lack of robustness to measurement noise. In (Goebel et al., 2009) it has been demonstrated that any asymptotically controllable nonlinear system admits a smooth patchy control-Lyapunov function if we admit the possibility that the number of patches may need to be infinite. In addition, it was shown how to construct a robust stabilizing hybrid feedback from a patchy control-Lyapunov function. Here the idea is outlined for the case when the number of patches is finite and then specialized to the nonholonomic integrator.

Generally, a global patchy smooth control-Lyapunov function for the origin for the control system ẋ = f(x, u) in the case of a finite number of patches is a collection of functions V_q and sets Ω_q and Ω'_q, where q ∈ Q := {1, ..., m}, such that

a. for each q ∈ Q, Ω_q and Ω'_q are open and
• O := ℝ^n \ {0} = ∪_{q∈Q} Ω_q = ∪_{q∈Q} Ω'_q,
• for each q ∈ Q, the outward unit normal to ∂Ω_q is continuous on (∂Ω_q \ ∪_{r>q} Ω'_r) ∩ O,
• for each q ∈ Q, Ω'_q ∩ O ⊂ Ω_q;

b. for each q ∈ Q, V_q is a smooth function defined on a neighborhood (relative to O) of Ω_q;

c. there exist a continuous positive definite function α and class-K∞ functions γ and γ̄ such that
• γ(|x|) ≤ V_q(x) ≤ γ̄(|x|)   ∀q ∈ Q, x ∈ (Ω_q \ ∪_{r>q} Ω'_r) ∩ O;
• for each q ∈ Q and x ∈ (Ω_q \ ∪_{r>q} Ω'_r) ∩ O there exists u_x such that
⟨∇V_q(x), f(x, u_x)⟩ ≤ −α(x)
• for each q ∈ Q and x ∈ (∂Ω_q \ ∪_{r>q} Ω'_r) ∩ O there exists u_x such that
⟨∇V_q(x), f(x, u_x)⟩ ≤ −α(x)
⟨n_q(x), f(x, u_x)⟩ ≤ −α(x)
where x ↦ n_q(x) denotes the outward unit normal to ∂Ω_q.

From this patchy control-Lyapunov function one can construct a robust hybrid feedback stabilizer, at least when the set {u : ⟨v, f(x, u)⟩ ≤ c} is convex for each real number c and every real vector v, with the following data

u_q := k_q(x),   C_q = (Ω_q \ ∪_{r>q} Ω'_r) ∩ O    (12)

where k_q is defined on C_q, continuous and such that

⟨∇V_q(x), f(x, k_q(x))⟩ ≤ −0.5 α(x)   ∀x ∈ C_q
⟨n_q(x), f(x, k_q(x))⟩ ≤ −0.5 α(x)   ∀x ∈ (∂Ω_q \ ∪_{r>q} Ω'_r) ∩ O    (13)

The jump set is given by

D_q = (O \ Ω_q) ∪ ( ∪_{r>q} Ω'_r ∩ O )    (14)

and the jump map is

G_q(x) = {r ∈ Q : r > q, x ∈ Ω'_r ∩ O}   if x ∈ Ω_q,
G_q(x) = {r ∈ Q : x ∈ Ω'_r ∩ O}   if x ∈ O \ Ω_q    (15)

With this control, the index increases with each jump, except possibly the first one. Thus, the number of jumps is finite, and the state converges to the origin, which is also stable.

4.3 Throw-and-catch control 

In (Prieur, 2001), it was shown how to combine local and global state feedback to achieve global stabilization and local performance. The idea, which exploits hysteresis switching (Halbaoui et al., 2009b), is quite simple. Two continuous functions, k_global and k_local, are given such that the feedback u = k_global(x) renders the origin of the control system ẋ = f(x, u) globally asymptotically stable, whereas the feedback u = k_local(x) makes the origin of the control system locally asymptotically stable with basin of attraction containing the open set O, which contains the origin. One then takes C_local to be a compact subset of O that contains the origin in its interior, and takes D_global to be a compact subset of C_local, again containing the origin in its interior and such that, when using the controller k_local, trajectories starting in D_global never reach the boundary of C_local (Fig. 6). Finally, the hybrid control which achieves global asymptotic stabilization while using the controller k_local for small signals is as follows

u = k_q(x),   C := { (x, q) : x ∈ C_q }
q⁺ = g(q, x) := toggle(q),   D := { (x, q) : x ∈ D_q }    (16)

In the problem of uniting local and global controllers, one can view the global controller as a type of "bootstrap" controller that is guaranteed to bring the system to a region where another controller can control the system adequately.
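The hysteresis logic in (16) is simple to prototype. The Python sketch below is an illustration only; the scalar plant, both feedback laws and the switching radii are hypothetical. The global controller brings the state into D_global, control is then handed to the local controller, and the logic switches back only if the state leaves C_local.

    k_global = lambda x: -2.0 * x       # globally stabilizing "bootstrap" feedback
    k_local  = lambda x: -5.0 * x       # local, higher-gain feedback
    r_in, r_out = 0.5, 1.0              # D_global = {|x| <= r_in}, C_local = {|x| <= r_out}

    dt, x, q = 1e-3, 10.0, "global"
    for _ in range(20000):
        u = k_local(x) if q == "local" else k_global(x)
        x += dt * (x + u)               # unstable plant x' = x + u
        if q == "global" and abs(x) <= r_in:     # jump: toggle to the local mode
            q = "local"
        elif q == "local" and abs(x) >= r_out:   # jump: toggle back to the global mode
            q = "global"

    print("final state %.5f in mode %s" % (x, q))

Because the switch-down radius r_in is strictly smaller than the switch-up radius r_out, measurement noise smaller than the gap cannot cause chattering between the two modes.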

An extension of the idea of combining local and global controllers is to assume the existence of a continuous bootstrap controller that is guaranteed to bring the system, in finite time, into a neighborhood of a set of points, not simply a neighborhood of the desired final destination (the controller does not need to be able to maintain the state in this neighborhood); moreover, these sets of points form chains that terminate at the desired final destination and along which controls are known to steer (or "throw") from one point in the chain to the next point in the chain. Moreover, in order to minimize error propagation along a chain, a local stabilizer is known for each point, except perhaps those points at the start of a chain. These local stabilizers can be employed to "catch" the state after each throw.



Fig. 6. Combining local and global controllers

4.4 Supervisory control 

In this section, we review the supervisory control framework for hybrid systems. One of the main characteristics of this approach is that the plant is approximated by a discrete-event system and the design is carried out in the discrete domain. Hybrid control systems in the supervisory control framework consist of a continuous (state-variable) system to be controlled, also called the plant, and a discrete event controller connected to the plant via an interface in a feedback configuration, as shown in (Fig. 7). It is generally assumed that the dynamic behavior of the plant is governed by a set of known nonlinear ordinary differential equations

ẋ(t) = f(x(t), r(t))    (17)



where x ∈ ℝ^n is the continuous state of the system and r ∈ ℝ^m is the continuous control input. In the model shown in (Fig. 7), the plant contains all continuous components of the hybrid control system, such as any conventional continuous controllers that may have been developed, a clock if time and synchronous operations are to be modeled, and so on. The controller is an event-driven, asynchronous discrete event system (DES), described by a finite state automaton. The hybrid control system also contains an interface that provides the means for communication between the continuous plant and the DES controller.



Fig. 7. Hybrid system model in the supervisory control framework.




Fig. 8. Partition of the continuous state space. 

The interface consists of the generator and the actuator as shown in (Fig. 7). The generator 
has been chosen to be a partitioning of the state space (see Fig. 8). The piecewise continuous 
command signal issued by the actuator is a staircase signal as shown in (Fig. 9), not unlike 
the output of a zero-order hold in a digital control system. The interface plays a key role in 
determining the dynamic behavior of the hybrid control system. Many times the partition of 
the state space is determined by physical constraints and it is fixed and given. 
Methodologies for the computation of the partition based on the specifications have also 
been developed. 
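A toy version of such a generator is sketched below in Python (an illustration only; the grid partition and the sample trajectory are hypothetical): it emits a plant symbol whenever the continuous state crosses into a new partition region.

    import numpy as np

    def region(x):
        # map a continuous state to a discrete region label (unit grid partition)
        return (int(np.floor(x[0])), int(np.floor(x[1])))

    t = np.linspace(0.0, 10.0, 1001)                              # sample times
    traj = np.stack([2.0 * np.cos(t), 2.0 * np.sin(t)], axis=1)   # placeholder plant response

    events, last = [], region(traj[0])
    for tk, xk in zip(t, traj):
        r = region(xk)
        if r != last:                  # boundary crossing -> generate a plant symbol
            events.append((round(float(tk), 2), r))
            last = r

    print("first plant symbols:", events[:5])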

In such a hybrid control system, the plant taken together with the actuator and generator, 
behaves like a discrete event system; it accepts symbolic inputs via the actuator and 
produces symbolic outputs via the generator. This situation is somewhat analogous to the 






Fig. 9. Command signal issued by the interface. 

way a continuous-time plant, equipped with a zero-order hold and a sampler, "looks" like a discrete-time plant. The DES which models the plant, actuator, and generator is called the DES plant model. From the DES controller's point of view, it is the DES plant model which is controlled.

The DES plant model is an approximation of the actual system and its behavior is an 
abstraction of the system's behavior. As a result, the future behavior of the actual continuous 
system cannot be determined uniquely, in general, from knowledge of the DES plant state 
and input. The approach taken in the supervisory control framework is to incorporate all the 
possible future behaviors of the continuous plant into the DES plant model. A conservative 
approximation of the behavior of the continuous plant is constructed and realized by a finite 
state machine. From a control point of view this means that if undesirable behaviors can be 
eliminated from the DES plant (through appropriate control policies) then these behaviors 
will be eliminated from the actual system. On the other hand, the fact that a control policy permits a given behavior in the DES plant is no guarantee that that behavior will occur in the actual system.

We briefly discuss the issues related to the approximation of the plant by a DES plant model. A dynamical system Σ can be described as a triple (T, W, B) with T ⊂ ℝ the time axis, W the signal space, and B ⊂ W^T (the set of all functions f : T → W) the behavior. The behavior of the DES plant model consists of all the pairs of plant and control symbols that it can generate. The time axis T represents here the occurrences of events. A necessary condition for the DES plant model to be a valid approximation of the continuous plant is that the behavior of the continuous plant model B_c is contained in the behavior B_d of the DES plant model, i.e. B_c ⊆ B_d.

The main objective of the controller is to restrict the behavior of the DES plant model in order to satisfy the control specifications. The specifications can be described by a behavior B_spec. Supervisory control of hybrid systems is based on the fact that if undesirable behaviors can be eliminated from the DES plant then these behaviors can likewise be eliminated from the actual system. This is described formally by the relation



B_d ∩ B_s ⊆ B_spec  ⟹  B_c ∩ B_s ⊆ B_spec    (18)

and is depicted in (Fig. 10). The challenge is to find a discrete abstraction with behavior B_d which is an approximation of the behavior B_c of the continuous plant and for which it is possible to design a supervisor that guarantees that the behavior of the closed-loop system satisfies the specifications B_spec. A more accurate approximation of the plant's behavior can be obtained by considering a finer partitioning of the state space for the extraction of the DES plant.







Fig. 10. The DES plant model as an approximation. 

An interesting aspect of the DES plant's behavior is that it is distinctly nondeterministic. This fact is illustrated in (Fig. 11). The figure shows two different trajectories generated by the same control symbol. Both trajectories originate in the same DES plant state p_1. (Fig. 11) shows that for a given control symbol, there are at least two possible DES plant states that can be reached from p_1. Transitions within a DES plant will usually be nondeterministic unless the boundaries of the partition sets are invariant manifolds with respect to the vector fields that describe the continuous plant.







Fig. 11. Nondeterminism of the DES plant model. 

There is an advantage to having a hybrid control system in which the DES plant model is 
deterministic. It allows the controller to drive the plant state through any desired sequence 
of regions provided, of course, that the corresponding state transitions exist in the DES plant 
model. If the DES plant model is not deterministic, this will not always be possible. This is 
because even if the desired sequence of state transitions exists, the sequence of inputs which 
achieves it may also permit other sequences of state transitions. Unfortunately, given a 
continuous-time plant, it may be difficult or even impossible to design an interface that 
leads to a DES plant model which is deterministic. Fortunately, it is not generally necessary 
to have a deterministic DES plant model in order to control it. The supervisory control 
problem for hybrid systems can be formulated and solved when the DES plant model is 
nondeterministic. This work builds upon the framework of supervisory control theory used in (Halbaoui et al., 2008) and (Halbaoui et al., 2009a).

5. Robustness to perturbations 

In control systems, several perturbations can occur and potentially destroy the good behavior for which the controller was designed. For example, noise in the measurements of the state taken by the controller arises in all implemented systems. It is also common that, when a controller is designed, only a simplified model of the system to be controlled, exhibiting the most important dynamics, is considered. This simplifies the control design in general. However, unmodeled sensor and actuator dynamics can substantially affect the behavior of the system when in the loop. In this section, it is desired that the hybrid controller provide a certain degree of robustness to such disturbances. In the following sections, general statements are made in this regard.

5.1 Robustness via filtered measurements 

In this section, the case of noise in the measurements of the state of the nonlinear system is considered. Measurement noise in hybrid systems can lead to nonexistence of solutions. This situation can be corrected, at least for small measurement noise, if, under global existence of solutions, C_c and D_c always "overlap" while ensuring that the stability properties still hold. The "overlap" means that for every ξ ∈ O, either ξ + e ∈ C_c or ξ + e ∈ D_c for all small e. There generally always exist inflations of C_c and D_c that preserve the semiglobal practical asymptotic stability, but they do not guarantee the existence of solutions for small measurement noise.

Moreover, the solutions are guaranteed to exist for any locally bounded measurement noise if the measurement noise does not appear in the flow and jump sets. This can be accomplished by filtering the measurements. (Fig. 12) illustrates this scenario. The state x is corrupted by the noise e and the hybrid controller H_c measures a filtered version of x + e.



Fig. 12. Closed-loop system with noise and filtered measurements.

The filter used for the noisy output y = x + e is considered to be linear and defined by the matrices A_f, B_f and L_f, and an additional parameter ε_f > 0. It is designed to be asymptotically stable. Its state is denoted by x_f, which takes values in ℝ^{n_f}. At jumps, x_f is reset according to the current value of y. The filter then has flows given by

ε_f ẋ_f = A_f x_f + B_f y    (19)

and jumps given by

x_f⁺ = −A_f^{−1} B_f y    (20)

The output of the filter replaces the state x in the feedback law. The resulting closed-loop system can be interpreted as a family of hybrid systems which depends on the parameter ε_f. It is denoted by H_cl^{ε_f} and is given by

ẋ = f_p(x, κ_c(L_f x_f, x_c))
ẋ_c = f_c(L_f x_f, x_c)                       (L_f x_f, x_c) ∈ C_c
ε_f ẋ_f = A_f x_f + B_f (x + e)

x⁺ = x
x_c⁺ ∈ G_c(L_f x_f, x_c)                      (L_f x_f, x_c) ∈ D_c    (21)
x_f⁺ = −A_f^{−1} B_f (x + e)



5.2 Robustness to sensor and actuator dynamics 

This section reviews the robustness of the closed loop H_cl when additional dynamics, coming from sensors and actuators, are incorporated. (Fig. 13) shows the closed loop H_cl with two additional blocks: a model for the sensor and a model for the actuator. Generally, to simplify the controller design procedure, these dynamics are not included in the model of the system ẋ = f_p(x, u) when the hybrid controller H_c is designed. Consequently, it is important to know whether the stability properties of the closed-loop system are preserved, at least semiglobally and practically, when those dynamics are incorporated in the closed loop.

The sensor and actuator dynamics are modeled as stable filters. The state of the filter which models the sensor dynamics is given by x_s ∈ ℝ^{n_s} with matrices (A_s, B_s, L_s), the state of the filter that models the actuator dynamics is given by x_a ∈ ℝ^{n_a} with matrices (A_a, B_a, L_a), and ε_d > 0 is common to both filters.

Augmenting H_cl by adding the filters and temporal regularization leads to a family H_cl^{ε_d} given as follows



ẋ = f_p(x, L_a x_a)
ẋ_c = f_c(L_s x_s, x_c)
τ̇ = −τ + τ̄                                  (L_s x_s, x_c) ∈ C_c or τ ≤ ρ
ε_d ẋ_s = A_s x_s + B_s (x + e)
ε_d ẋ_a = A_a x_a + B_a κ_c(L_s x_s, x_c)

x⁺ = x
x_c⁺ ∈ G_c(L_s x_s, x_c)
x_s⁺ = x_s                                   (L_s x_s, x_c) ∈ D_c and τ ≥ ρ    (22)
x_a⁺ = x_a
τ⁺ = 0

where τ̄ is a constant satisfying τ̄ > ρ.

The following result states that for fast enough sensors and actuators, and small enough 
temporal regularization parameter, the compact set A is semiglobally practically 
asymptotically stable. 







Fig. 13. Closed-loop system with sensor and actuator dynamics. 

5.3 Robustness to sensor dynamics and smoothing 

In many hybrid control applications, the state of the controller is explicitly given as a continuous state ξ and a discrete state q ∈ Q := {1, ..., n}, that is, x_c := [ξ, q]^T. Where this is the case and the discrete state q selects a different control law to be applied to the system for different values of q, the control law generated by the hybrid controller H_c can have jumps when q changes. In many scenarios, it is not possible for the actuator to switch between control laws instantly. In addition, particularly when the control law κ(·, q) is continuous for each q ∈ Q, it is desirable to have a smooth transition between the control laws when q changes.






Fig. 14. Closed-loop system with sensor dynamics and control smoothing. 

(Fig. 14) shows the closed-loop system, denoted H_cl^{ε_u}, resulting from adding a block that makes a smooth transition between the control laws indexed by q and denoted by κ_q. The smoothing control block is modeled as a linear filter for the variable q. It is defined by the parameter ε_u and the matrices (A_u, B_u, L_u). The output of the control smoothing block is given by



α(x, x_c, L_u x_u) = Σ_{q∈Q} λ_q(L_u x_u) κ(x, x_c, q)    (23)

where for each q ∈ Q, λ_q : ℝ → [0, 1] is continuous and λ_q(q) = 1. Note that the output is such that the control laws are smoothly "blended" by the functions λ_q.
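A minimal sketch of such blending, assuming two control laws indexed by q ∈ {1, 2} and triangular blending functions λ_q (all hypothetical choices, not from the text), is given below in Python; the filter state s tracks the jumping discrete state q and the applied input is the λ-weighted combination of the two laws.

    kappa = {1: lambda x: -1.0 * x,                  # control law for q = 1
             2: lambda x: -4.0 * x}                  # control law for q = 2
    lam = lambda q, s: max(0.0, 1.0 - abs(s - q))    # continuous, lam(q, q) = 1

    def blended_u(x, s):
        # smoothed output: sum over q of lam_q(s) * kappa_q(x)
        return sum(lam(q, s) * kappa[q](x) for q in kappa)

    eps_u, dt = 0.2, 1e-3
    s, q, x = 1.0, 1, 2.0                            # filter state, discrete state, plant state
    for k in range(5000):
        if k == 2000:
            q = 2                                    # the hybrid controller switches its law
        s += dt / eps_u * (q - s)                    # eps_u * s' = -s + q (first-order filter)
        x += dt * blended_u(x, s)                    # simple integrator plant x' = u

    print("filter state %.3f, plant state %.5f" % (s, x))

Between the two laws the weights sum to one, so the applied input slides continuously from the first law to the second instead of jumping when q changes.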

In addition to this block, a filter modeling the sensor dynamics is also incorporated, as in Section 5.2. The resulting closed loop can be written as



ẋ = f_p(x, α(x, x_c, L_u x_u))
ẋ_c = f_c(L_s x_s, x_c)
q̇ = 0
τ̇ = −τ + τ̄                                  (L_s x_s, x_c) ∈ C_c or τ ≤ ρ
ε_s ẋ_s = A_s x_s + B_s x
ε_u ẋ_u = A_u x_u + B_u q

x⁺ = x
x_c⁺ ∈ G_c(L_s x_s, x_c)
x_s⁺ = x_s                                   (L_s x_s, x_c) ∈ D_c and τ ≥ ρ    (24)
x_u⁺ = x_u
τ⁺ = 0

6. Conclusion 

In this chapter, a dynamical systems approach to the analysis and design of hybrid systems has been presented from a robust control point of view. The stability and convergence tools for hybrid systems presented include hybrid versions of the traditional Lyapunov stability theorem and of LaSalle's invariance principle.

The robustness of asymptotic stability for classes of closed-loop systems resulting from hybrid control was presented, with results for perturbations arising from the presence of measurement noise, unmodeled sensor and actuator dynamics, and control smoothing.

It is very important to have good software tools for the simulation, analysis and design of hybrid systems, which by their nature are complex systems. Researchers have recognized this need and several software packages have been developed.



7. References 

Rajeev, A.; Thomas, A. & Pei-Hsin, H.(1993). Automatic symbolic verification of embedded 

systems, In IEEE Real-Time Systems Symposium, 2-11, DOI: 

10.1109/REAL.1993.393520 . 
Dang, T. (2000). Verification et Synthese des Systemes Hybrides. PhD thesis, Institut National Polytechnique de Grenoble.
Girard, A. (2006). Analyse algorithmique des systemes hybrides. PhD thesis, Universite Joseph 

Fourier (Grenoble-I). 
Ancona, F. & Bressan, A. (1999). Patchy vector fields and asymptotic stabilization, ESAIM: 

Control, Optimisation and Calculus of Variations, 4:445-471, DOI: 

10.1051/ cocv:2004003. 
Byrnes, C. I. & Martin, C. F. (1995). An integral-invariance principle for nonlinear systems, IEEE 

Transactions on Automatic Control, 983-994, ISSN: 0018-9286. 




Cai, C; Teel, A. R. & Goebel, R. (2007). Results on existence of smooth Lyapunov functions for 

asymptotically stable hybrid systems with nonopen basin of attraction, submitted to the 

2007 American Control Conference, 3456 - 3461, ISSN: 0743-1619. 
Cai, C; Teel, A. R. & Goebel, R. (2006). Smooth Lyapunov functions for hybrid systems Part I: 

Existence is equivalent to robustness & Part II: (Pre-) asymptotically stable compact 

sets, 1264-1277, ISSN 0018-9286. 
Cai, C; Teel, A. R. & Goebel, R. (2005). Converse Lyapunov theorems and robust asymptotic 

stability for hybrid systems, Proceedings of 24th American Control Conference, 12-17, 

ISSN: 0743-1619. 
Chellaboina, V.; Bhat, S. P. & HaddadWH. (2002). An invariance principle for nonlinear hybrid 

and impulsive dynamical systems. Nonlinear Analysis, Chicago, IL, USA, 3116 - 

3122,ISBN: 0-7803-5519-9. 
Goebel, R.; Prieur, C. & Teel, A. R. (2009). smooth patchy control Lyapunov functions. 

Automatica (Oxford) Y, 675-683 ISSN : 0005-1098. 
Goebel, R. & Teel, A. R. (2006). Solutions to hybrid inclusions via set and graphical convergence 

with stability theory applications. Automatica, 573-587, DOI: 

10.1016/j.automatica.2005.12.019. 
LaSalle, J. P. (1967). An invariance principle in the theory of stability, in Differential equations and 

dynamical systems. Academic Press, New York. 
LaSalle, J. P. (1976) The stability of dynamical systems. Regional Conference Series in Applied 

Mathematics, SIAM ISBN-13: 978-0-898710-22-9. 
Lygeros, J.; Johansson, K. H.; Simić, S. N.; Zhang, J. & Sastry, S. S. (2003). Dynamical properties of hybrid automata. IEEE Transactions on Automatic Control, 2-17, ISSN: 0018-9286.
Prieur, C. (2001). Uniting local and global controllers with robustness to vanishing noise, 

Mathematics Control, Signals, and Systems, 143-172, DOI: 10.1007/ PL00009880 
Ryan, E. P. (1998). An integral invariance principle for differential inclusions with applications in 

adaptive control. SIAM Journal on Control and Optimization, 960-980, ISSN 0363- 

0129. 
Sanfelice, R. G.; Goebel, R. & Teel, A. R. (2005). Results on convergence in hybrid systems via 

detectability and an invariance principle. Proceedings of 2005 American Control 

Conference, 551-556, ISSN: 0743-1619. 
Sontag, E. (1989). Smooth stabilization implies coprime factorization. IEEE Transactions on 

Automatic Control, 435-443, ISSN: 0018-9286. 
DeCarlo, R.A.; Branicky, M.S.; Pettersson, S. & Lennartson, B.(2000). Perspectives and results 

on the stability and stabilizability of hybrid systems. Proc. of IEEE, 1069-1082, ISSN: 

0018-9219. 
Michel, A. N. (1999). Recent trends in the stability analysis of hybrid dynamical systems. IEEE Trans. Circuits Syst. I: Fund. Theory Appl., 120-134, ISSN: 1057-7122.
Halbaoui, K.; Boukhetala, D. & Boudjema, F. (2008). New robust model reference adaptive control for induction motor drives using a hybrid controller. International Symposium on Power Electronics, Electrical Drives, Automation and Motion, Italy, 1109-1113, ISBN: 978-1-4244-1663-9.
Halbaoui, K.; Boukhetala, D. and Boudjema, F. (2009a). Speed Control of Induction Motor 

Drives Using a New Robust Hybrid Model Reference Adaptive Controller. Journal of 

Applied Sciences, 2753-2761, ISSN:18125654. 
Halbaoui, K.; Boukhetala, D. and Boudjema, F. (2009b). Hybrid adaptive control for speed 

regulation of an induction motor drive, Archives of Control Sciences, V2. 



Robust Stability and Control of Linear Interval Parameter Systems Using Quantitative (State Space) and Qualitative (Ecological) Perspectives

Rama K. Yedavalli and Nagini Devarakonda
The Ohio State University
United States of America
1. Introduction 



The problem of maintaining the stability of a nominally stable linear time invariant system subject to linear perturbation has been an active topic of research for quite some time. The recent published literature on this 'robust stability' problem can be viewed mainly from two perspectives, namely i) the transfer function (input/output) viewpoint and ii) the state space viewpoint. In the transfer function approach, the analysis and synthesis is essentially carried out in the frequency domain, whereas in the state space approach it is basically carried out in the time domain. Another distinction that is especially germane to this viewpoint is that the frequency domain treatment involves the extensive use of 'polynomial' theory while the time domain treatment involves the use of 'matrix' theory. Recent advances in this field are surveyed in [1]-[2].

Even though in typical control problems these two theories are intimately related and qualitatively similar, it is also important to keep in mind that there are noteworthy differences between these two approaches ('polynomial' vs 'matrix'), and this chapter (both in parts I and II) highlights the use of the direct matrix approach in the solution to the robust stability and control design problems.

2. Uncertainty characterization and robustness 

It was shown in [3] that modeling errors can be broadly categorized as i) parameter 
variations, ii) unmodeled dynamics iii) neglected nonlinearities and finally iv) external 
disturbances. Characterization of these modeling errors in turn depends on the 
representation of dynamic system, namely whether it is a frequency domain, transfer 
function framework or time domain state space framework. In fact, some of these can be 
better captured in one framework than in another. For example, it can be argued 
convincingly that real parameter variations are better captured in time domain state space 
framework than in frequency domain transfer function framework. Similarly, it is intuitively 
clear that unmodeled dynamics errors can be better captured in the transfer function 
framework. By similar lines of thought, it can be safely agreed that while neglected 
nonlinearities can be better captured in state space framework, neglected disturbances can 




be captured with equal ease in both frameworks. Thus it is not surprising that most of the 
robustness studies of uncertain dynamical systems with real parameter variations are being 
carried out in time domain state space framework and hence in this chapter, we emphasize 
the aspect of robust stabilization and control of linear dynamical systems with real 
parameter uncertainty. 

Stability and performance are two fundamental characteristics of any feedback control 
system. Accordingly, stability robustness and performance robustness are two desirable 
(sometimes necessary) features of a robust control system. Since stability robustness is a 
prerequisite for performance robustness, it is natural to address the issue of stability 
robustness first and then the issue of performance robustness. 

Since stability tests are different for time varying systems and time invariant systems, it is 
important to pay special attention to the nature of perturbations, namely time varying 
perturbations versus time invariant perturbations, where it is assumed that the nominal 
system is a linear time invariant system. Typically, stability of linear time varying systems is 
assessed using Lyapunov stability theory using the concept of quadratic stability whereas 
that of a linear time invariant system is determined by the Hurwitz stability, i.e. by the 
negative real part eigenvalue criterion. This distinction about the nature of perturbation 
profoundly affects the methodologies used for stability robustness analysis. 
Let us consider the following linear, homogeneous, time invariant asymptotically stable system in state space form subject to a linear perturbation E:

ẋ = (A0 + E)x,   x(0) = x0                                            (1)

where A0 is an n×n asymptotically stable matrix and E is the error (or perturbation) matrix.
The two aspects of characterization of the perturbation matrix E which have significant influence on the scope and methodology of any proposed analysis and design scheme are i) the temporal nature and ii) the boundedness nature of E. Specifically, we can have the following scenario:

i. Temporal nature: time invariant error (E = constant) vs time varying error (E = E(t)).

ii. Boundedness nature: unstructured (norm bounded) vs structured (elemental bounds).

The stability robustness problem for linear time invariant systems in the presence of linear 
time invariant perturbations (i.e. robust Hurwitz invariance problem) is basically addressed 
by testing for the negativity of the real parts of the eigenvalues (either in frequency domain 
or in time domain treatments), whereas the time varying perturbation case is known to be 
best handled by the time domain Lyapunov stability analysis. The robust Hurwitz 
invariance problem has been widely discussed in the literature essentially using the 
polynomial approach [4] -[5]. In this section, we address the time varying perturbation case, 
mainly motivated by the fact that any methodology which treats the time varying case can 
always be specialized to the time invariant case but not vice versa. However, we pay a price 
for the same, namely conservatism associated with the results when applied to the time 
invariant perturbation case. A methodology specifically tailored to time invariant 
perturbations is discussed and included by the author in a separate publication [6]. 




It is also appropriate to discuss, at this point, the characterization with regard to the 
boundedness of the perturbation. In the so-called 'unstructured' perturbation, it is assumed
that one cannot clearly identify the location of the perturbation within the nominal matrix 
and thus one has simply a bound on the norm of the perturbation matrix. In the 'structured' 
perturbation, one has information about the location(s) of the perturbation and thus one can 
think of having bounds on the individual elements of the perturbation matrix. This 
approach can be labeled as 'Elemental Perturbation Bound Analysis (EPBA)'. Whether 
'unstructured' norm bounded perturbation or 'structured' elemental perturbation is 
appropriate to consider depends very much on the application at hand. However, it can be 
safely argued that 'structured' real parameter perturbation situation has extensive 
applications in many engineering disciplines as the elements of the matrices of a linear state 
space description contain parameters of interest in the evolution of the state variables and it 
is natural to look for bounds on these real parameters that can maintain the stability of the 
state space system. 

3. Robust stability and control of linear interval parameter systems under 
state space framework 

In this section, we first give a brief account of the robust stability analysis techniques in 3.1 
and then in subsection 3.2 we discuss the robust control design aspect. 

3.1 Robust stability analysis 

The starting point for the problem at hand is to consider a linear state space system described by

ẋ(t) = [A0 + E] x(t)

where x is an n dimensional state vector, A0 is an n×n asymptotically stable matrix and E is the 'perturbation' matrix. The issue of 'stability robustness measures' involves the determination of bounds on E which guarantee the preservation of stability of (1). Evidently, the characterization of the perturbation matrix E has considerable influence on the derived result. In what follows, we summarize a few of the available results, based on the characterization of E.

1. Time varying (real) unstructured perturbation with spectral norm: sufficient bound

For this case, the perturbation matrix E is allowed to be time varying, i.e. E(t), and a bound on the spectral norm σ_max(E(t)) (where σ(·) denotes the singular value of (·)) is derived. When a bound on the norm of E is given, we refer to it as an 'unstructured' perturbation. This norm produces a spherical region in parameter space. The following result is available for this case [7]-[8]:

σ_max(E(t)) < 1 / σ_max(P)                                            (2)

where P is the solution to the Lyapunov matrix equation

P A0 + A0^T P + 2I = 0                                                (3)

See Refs [9], [10], [11] for results related to this case.
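As a minimal computational sketch of (2)-(3), assuming NumPy/SciPy are available (the matrix A0 below is just an illustrative Hurwitz stable test matrix, not taken from the text):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A0 = np.array([[-2.0, 0.0, -1.0],
               [0.0, -3.0, 0.0],
               [-1.0, -1.0, -4.0]])          # nominally stable matrix (illustrative)

# Solve P A0 + A0^T P + 2I = 0, written as A0^T P + P A0 = -2I
P = solve_continuous_lyapunov(A0.T, -2.0 * np.eye(3))

mu_unstructured = 1.0 / np.linalg.norm(P, 2) # bound (2): sigma_max(E(t)) < 1/sigma_max(P)
print(mu_unstructured)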




2. Time varying (real) structured variation

Case 1: Independent variations (sufficient bound) [12]-[13]

Here the elements of E(t) are bounded individually by

|E_ij(t)| ≤ ε_ij                                                      (4)

and we define ε = max_ij ε_ij. Then stability of (1) is maintained if

ε < 1 / σ_max[ (P_m U_e)_s ]                                          (5)

where P satisfies equation (3) and U_eij = ε_ij/ε. For cases when the ε_ij are not known, one can take U_eij = |A_ij| / |A_ij|_max. (·)_m denotes the matrix with all modulus elements and (·)_s denotes the symmetric part of (·).
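A similar sketch for the structured bound (4)-(5), again assuming NumPy/SciPy; the structure matrix U_e and the matrix A0 below are illustrative choices (here all elements are assumed to be perturbed equally), not values taken from the text:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A0 = np.array([[-2.0, 0.0, -1.0],
               [0.0, -3.0, 0.0],
               [-1.0, -1.0, -4.0]])
Ue = np.ones((3, 3))                        # relative perturbation structure U_e

P = solve_continuous_lyapunov(A0.T, -2.0 * np.eye(3))
Pm = np.abs(P)                              # (.)_m : matrix of element moduli
M = Pm @ Ue
Ms = 0.5 * (M + M.T)                        # (.)_s : symmetric part
eps_bound = 1.0 / np.linalg.norm(Ms, 2)     # stability is kept for eps = max |E_ij(t)| below this value
print(eps_bound)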

3. Time invariant (real) structured perturbation, E_ij = constant

Case i: Independent variations [13]-[15] (sufficient bounds). For this case, E can be characterized as

E = S1 D S2                                                           (6)

where S1 and S2 are constant, known matrices, |D_ij| ≤ d·d_ij with d_ij ≥ 0 given, and d > 0 is the unknown. Let U be the matrix with elements U_ij = d_ij. Then the bound on d is given by [13]

d < 1 / { sup_{ω≥0} ρ[ ( |S2 (jωI − A0)^{-1} S1| )_m U ] }             (7)

Notice that the characterization of E (when time invariant) in (4) is accommodated by the characterization in [15]. ρ(·) denotes the spectral radius of (·).

Case ii: Linear dependent variation. For this case, E is characterized (as in (6) before) by

E = Σ_i β_i E_i                                                       (8)

and bounds on |β_i| are sought. Improved bounds on |β_i| are presented in [6]. This type of representation describes a 'polytope of matrices' as discussed in [4]. In this notation, the interval matrix case (i.e. the independent variation case) is a special case of the above representation where each E_i contains a single nonzero element, at a different place in the matrix for different i.

For the time invariant, real structured perturbation case, there are no computationally 
tractable necessary and sufficient bounds either for polytope of matrices or for interval 
matrices (even for a 2 x 2 case). Even though some derivable necessary and sufficient 
conditions are presented in [16] for any general variation in E (not necessarily linear 
dependent and independent case), there are no easily computable methods available to 
determine the necessary and sufficient bounds at this stage of research. So most of the 
research, at this point of time, seems to aim at getting better (less conservative) sufficient 
bounds. The following example compares the sufficient bounds given in [13]-[15] for the 
linear dependent variation case. 






Let us consider the example given in [15] in which the perturbed system matrix is given by

(A0 + BKC) =
[ -2+k1     0     -1+k1 ]
[   0     -3+k2     0   ]
[ -1+k1   -1+k2   -4+k1 ]

Taking the nominally stable matrix to be

A0 =
[ -2    0   -1 ]
[  0   -3    0 ]
[ -1   -1   -4 ]

the error matrix, with k1 and k2 as the uncertain parameters, is given by

E = k1 E1 + k2 E2

where

E1 =
[ 1  0  1 ]
[ 0  0  0 ]
[ 1  0  1 ]

and

E2 =
[ 0  0  0 ]
[ 0  1  0 ]
[ 0  1  0 ]

The following are the bounds on |k1| and |k2| obtained by [15] and by the proposed method [6]:

                         |k1|        |k2|
Method of [15]           0.815       0.875
Proposed method [6]      1.55        1.75
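As a quick numerical spot-check of this example (using the matrices as reconstructed above, and assuming NumPy), one can sample (k1, k2) on a grid inside the larger reported box and confirm that every sampled matrix remains Hurwitz stable; this is only a sampling check, not a proof:

import numpy as np

A0 = np.array([[-2.0, 0.0, -1.0],
               [0.0, -3.0, 0.0],
               [-1.0, -1.0, -4.0]])
E1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 0.0, 0.0],
               [1.0, 0.0, 1.0]])
E2 = np.array([[0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0]])

ok = True
for k1 in np.linspace(-1.55, 1.55, 21):
    for k2 in np.linspace(-1.75, 1.75, 21):
        eigs = np.linalg.eigvals(A0 + k1 * E1 + k2 * E2)
        ok &= bool(np.all(eigs.real < 0.0))
print("all grid points Hurwitz stable:", ok)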



3.2 Robust control design for linear systems with structured uncertainty 

Having discussed the robustness analysis issue above, we now switch our attention to the 
robust control design issue. Towards this direction, we now present a linear robust control 
design algorithm for linear deterministic uncertain systems whose parameters vary within 
given bounded sets. The algorithm explicitly incorporates the structure of the uncertainty 
into the design procedure and utilizes the elemental perturbation bounds developed above. 
A linear state feedback controller is designed by parameter optimization techniques to 
maximize (in a given sense) the elemental perturbation bounds for robust stabilization. 
There is a considerable amount of literature on the aspect of designing linear controllers for 
linear time invariant systems with small parameter uncertainty. However, for uncertain
systems whose dynamics are described by interval matrices (i.e., matrices whose elements 
are known to vary within a given bounded interval), linear control design schemes that 
guarantee stability have been relatively scarce. Reference [17] compares several techniques 
for designing linear controllers for robust stability for a class of uncertain linear systems. 
Among the methods considered are the standard linear quadratic regulator (LQR) design, 
Guaranteed Cost Control (GCC) method of [18], Multistep Guaranteed Cost Control 
(MGCC) of [17]. In these methods, the weighting on state in a quadratic cost function and 
the Riccati equation are modified in the search for an appropriate controller. Also, the 
parameter uncertainty is assumed to enter linearly and restrictive conditions are imposed on 
the bounding sets. In [18], norm inequalities on the bounding sets are given for stability but 




they are conservative since they do not take advantage of the system structure. There is no 
guarantee that a linear state feedback controller exists. Reference [19] utilizes the concept of 
'Matching conditions (MC)' which in essence constrain the manner in which the uncertainty 
is permitted to enter into the dynamics and show that a linear state feedback control that 
guarantees stability exists provided the uncertainty satisfies matching conditions. By this 
method large bounding sets produce large feedback gains but the existence of a linear 
controller is guaranteed. But no such guarantee can be given for general 'mismatched' 
uncertain systems. References [20] and [21] present methods which need the testing of 
definiteness of a Lyapunov matrix obtained as a function of the uncertain parameters. In the 
multimodel theory approach, [22] considers a discrete set of points in the parameter 
uncertainty range to establish the stability. This paper addresses the stabilization problem 
for a continuous range of parameters in the uncertain parameter set (i.e. in the context of 
interval matrices). The proposed approach attacks the stability of interval matrix problem 
directly in the matrix domain rather than converting the interval matrix to interval 
polynomials and then testing the Kharitonov polynomials. 
Robust control design using perturbation bound analysis [23],[24]

Consider a linear, time invariant system described by

ẋ = Ax + Bu,   x(0) = x0

where x is the n×1 state vector and the control u is m×1. The matrix pair (A,B) is assumed to be completely controllable. With the linear state feedback control law

u = Gx

the nominal closed loop system matrix is given by

A_cl = A + BG

where G is the Riccati based control gain obtained from

K A + A^T K − (1/ρ_c) K B R0^{-1} B^T K + Q0 = 0,     G = −(1/ρ_c) R0^{-1} B^T K

and A_cl is asymptotically stable. Here Q0 and R0 are any given weighting matrices which are symmetric and positive definite, and ρ_c is the design variable. The main interest in determining G is to keep the nominal closed loop system stable. The reason the Riccati approach is used to determine G is that it readily renders (A + BG) asymptotically stable under the above assumptions on Q0 and R0.

Now consider the perturbed system with linear time varying perturbations E_A(t) and E_B(t) in the matrices A and B respectively, i.e.,

ẋ = [A + E_A(t)] x(t) + [B + E_B(t)] u(t)

Let ΔA and ΔB be the perturbation matrices formed by the maximum modulus deviations expected in the individual elements of the matrices A and B respectively. Then one can write

ε_a = max_{i,j} (ΔA)_ij,    ε_b = max_{i,j} (ΔB)_ij        (absolute variation)




where ε_a is the maximum of all the elements in ΔA and ε_b is the maximum of all the elements in ΔB. Then the total perturbation in the linear closed loop system matrix of the perturbed system above, with nominal control u = Gx, is given by

ΔA + ΔB G_m = ε_a U_ea + ε_b U_eb G_m

Assuming the ratio s = ε_b/ε_a is known, we can extend the main result of equations (2) and (3) to the linear state feedback control system above and obtain the following design observation.

Design observation 1:

The perturbed linear system is stable for all perturbations bounded by ε_a and ε_b if

ε_a < 1 / σ_max[ ( P_m (U_ea + s U_eb G_m) )_s ] ≡ μ                  (9)

and ε_b < s·μ, where P is the solution of

P(A + BG) + (A + BG)^T P + 2I_n = 0



Remark: If we suppose ΔA = 0 and ΔB = 0 and expect some control gain perturbation ΔG, we can write

ΔG = ε_g U_eg                                                         (10)

and stability is then assured if

ε_g < 1 / σ_max[ ( P_m B_m U_eg )_s ] ≡ μ_g                            (11)

In this context μ_g can be regarded as a 'gain margin'.

For a given uncertainty structure (U_ea and U_eb), one method of designing the linear controller is to determine G by varying ρ_c such that μ is maximized. For an aircraft control example which utilizes this method, see Reference [9].
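A sketch of this kind of design loop (gain from a ρ_c-parameterized Riccati equation, robustness bound from the Lyapunov solution) is given below, assuming NumPy/SciPy and a standard LQR parameterization; the matrices A, B, Q0, R0, the structures U_ea, U_eb and the ratio s are all illustrative placeholders, not values from the text.

import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q0 = np.eye(2)
R0 = np.eye(1)
Uea = np.ones((2, 2))        # structure of perturbations in A (illustrative)
Ueb = np.ones((2, 1))        # structure of perturbations in B (illustrative)
s = 1.0                      # assumed ratio eps_b / eps_a

best_mu, best_G = -np.inf, None
for rho_c in np.logspace(-2, 2, 41):                  # sweep the design variable rho_c
    K = solve_continuous_are(A, B, Q0, rho_c * R0)    # A'K + KA - (1/rho_c) K B R0^{-1} B' K + Q0 = 0
    G = -(1.0 / rho_c) * np.linalg.solve(R0, B.T @ K) # G = -(1/rho_c) R0^{-1} B' K
    Acl = A + B @ G
    P = solve_continuous_lyapunov(Acl.T, -2.0 * np.eye(2))
    M = np.abs(P) @ (Uea + s * Ueb @ np.abs(G))       # P_m (U_ea + s U_eb G_m)
    mu = 1.0 / np.linalg.norm(0.5 * (M + M.T), 2)     # tolerable eps_a as in (9)
    if mu > best_mu:
        best_mu, best_G = mu, G
print(best_mu, best_G)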

4. Robust stability and control of linear interval parameter systems using 
ecological perspective 

It is well recognized that natural systems such as ecological and biological systems are 
highly robust under various perturbations. On the other hand, engineered systems can be 
made highly optimal for good performance but they tend to be non-robust under 
perturbations. Thus, it is natural and essential for engineers to delve into the question of what the underlying features of natural systems are that make them so robust, and then to try to apply these principles to make engineered systems more robust. Towards this
objective, the interesting aspect of qualitative stability in ecological systems is considered in 
particular. The fields of population biology and ecology deal with the analysis of growth 
and decline of populations in nature and the struggle of species to predominate over one 
another. The existence or extinction of a species, apart from its own effect, depends on its 
interactions with various other species in the ecosystem it belongs to. Hence the type of 
interaction is very critical to the sustenance of species. In the following sections these 




interactions and their nature are thoroughly investigated and the effect of these qualitative 
interactions on the quantitative properties of matrices, specifically on three matrix 
properties, namely, eigenvalue distribution, normality/ condition number and robust 
stability are presented. This type of study is important for researchers in both fields since 
qualitative properties do have significant impact on the quantitative aspects. In the 
following sections, this interrelationship is established in a sound mathematical framework. 
In addition, these properties are exploited in the design of controllers for engineering 
systems to make them more robust to uncertainties such as described in the previous 
sections. 

4.1 Robust stability analysis using principles of ecology 
4.1.1 Brief review of ecological principles 

In this section a few ecological system principles that are of relevance to this chapter are 
briefly reviewed. Thorough understanding of these principles is essential to appreciate their 
influence on various mathematical results presented in the rest of the chapter. 
In a complex community composed of many species, numerous interactions take place. 
These interactions in ecosystems can be broadly classified as i) Mutualism, ii) Competition, 
iii) Commensalism/Ammensalism and iv) Predation (Parasitism). Mutualism occurs when 
both species benefit from the interaction. When one species benefits/ suffers and the other 
one remains unaffected, the interaction is classified as Commensalism/Ammensalism. 
When species compete with each other, that interaction is known as Competition. Finally, if 
one species is benefited and the other suffers, the interaction is known as Predation 
(Parasitism). In ecology, the magnitudes of the mutual effects of species on each other are 
seldom precisely known, but one can establish with certainty, the types of interactions that 
are present. Many mathematical population models were proposed over the last few 
decades to study the dynamics of eco/bio systems, which are discussed in textbooks [25]- 
[26]. The most significant contributions in this area come from the works of Lotka and 
Volterra. The following is a model of a predator-prey interaction where x is the prey and y is 
the predator. 

ẋ = x f(x,y)
ẏ = y g(x,y)                                                          (12)

where it is assumed that ∂f(x,y)/∂y < 0 and ∂g(x,y)/∂x > 0. This means that the effect of y on the rate of change of x (ẋ) is negative while the effect of x on the rate of change of y (ẏ) is positive.

The stability of the equilibrium solutions of these models has been a subject of intense study 
in life sciences [27]. These models and the stability of such systems give deep insight into the 
balance in nature. If a state of equilibrium can be determined for an ecosystem, it becomes 
inevitable to study the effect of perturbation of any kind in the population of the species on 
the equilibrium. These small perturbations from equilibrium can be modeled as linear state 
space systems where the state space plant matrix is the 'Jacobian'. This means that
technically in the Jacobian matrix, one does not know the actual magnitudes of the partial 
derivatives but their signs are known with certainty. That is, the nature of the interaction is 
known but not the strengths of those interactions. As mentioned previously, there are four 
classes of interactions and after linearization they can be represented in the following 






Interaction type          Effect on the two species (digraph)               Matrix representation

Mutualism                 each species affects the other positively         [ *  + ]
                                                                            [ +  * ]

Competition               each species affects the other negatively         [ *  - ]
                                                                            [ -  * ]

Commensalism              one species is affected positively,               [ *  + ]
                          the other is unaffected                           [ 0  * ]

Ammensalism               one species is affected negatively,               [ *  - ]
                          the other is unaffected                           [ 0  * ]

Predation (Parasitism)    one species is affected positively,               [ *  + ]
                          the other negatively                              [ -  * ]

Table 1. Types of interactions between two species in an ecosystem 

In Table 1, the second column describes the interaction as represented in a directed graph or 'digraph' [28], while the third column is the matrix representation of the interaction between two species. '*' represents the effect of a species on itself. In other words, in the Jacobian matrix, the 'qualitative' information about the species is represented by the signs +, - or 0. Thus, the (i,j)th entry of the state space (Jacobian) matrix simply consists of the signs +, - or 0, with the + sign indicating species j having a positive influence on species i, the - sign indicating a negative influence and 0 indicating no influence. The diagonal elements give information regarding the effect of a species on itself. A negative sign means the species is 'self-regulatory', a positive sign means it aids the growth of its own population, and zero means that it has no effect on itself. For example, in Figure 1 below, the sign pattern matrices A1 and A2 are in Jacobian form while D1 and D2 are their corresponding digraphs.








Fig. 1. Various sign patterns and their corresponding digraphs representing ecological 
systems; a) three species system b) five species system 






4.1.2 Qualitative or sign stability 

Since traditional mathematical tests for stability fail to analyze the stability of such 
ecological models, an extremely important question then, is whether it can be concluded, 
just from this sign pattern, whether the system is stable or not. If so, the system is said to be 
'qualitatively stable' [29-31]. In some literature, this concept is also labeled as 'sign stability'. 
In what follows, these two terms are used interchangeably. It is important to keep in mind 
that systems (matrices) that are qualitatively stable (sign stable) are also stable in the
ordinary sense. That is, qualitative stability implies Hurwitz stability (eigenvalues with 
negative real part) in the ordinary sense of engineering sciences. In other words, once a 
particular sign matrix is shown to be qualitatively (sign) stable, any magnitude can be inserted in 
those entries and for all those magnitudes the matrix is automatically Hurwitz stable. This is the 
most attractive feature of a sign stable matrix. However, the converse is not true. Systems 
that are not qualitatively stable can still be stable in the ordinary sense for certain 
appropriate magnitudes in the entries. From now on, to distinguish from the concept of 
'qualitative stability' of life sciences literature, the label of 'quantitative stability' for the 
standard Hurwitz stability in engineering sciences is used. 
These conditions, in matrix theory notation, are given below:

i.   a_ii ≤ 0 for all i
ii.  a_ii < 0 for at least one i
iii. a_ij a_ji ≤ 0 for all i ≠ j
iv.  a_ij a_jk ... a_mi = 0 for any sequence of three or more distinct indices i, j, k, ..., m
v.   det(A) ≠ 0
vi.  Color test (elaborated in [32],[33])

Note: In graph theory, the products a_ij a_ji are referred to as l-cycles and the products a_ij a_jk ... a_mi are referred to as k-cycles. In [34], [35], l-cycles are termed 'interactions' while k-cycles are termed 'interconnections' (which essentially are all zero in the case of sign stable matrices).
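A sketch of a checker for conditions (i)-(v), assuming NumPy, is given below; the color test (vi) is not implemented, the brute-force cycle enumeration is only practical for small n, and the test matrix at the end is an illustrative predator-prey chain, not an example from the text.

import numpy as np
from itertools import permutations, combinations

def is_sign_stable_candidate(A, tol=1e-12):
    n = A.shape[0]
    d = np.diag(A)
    if np.any(d > tol):                          # (i) a_ii <= 0
        return False
    if not np.any(d < -tol):                     # (ii) a_ii < 0 for at least one i
        return False
    for i in range(n):                           # (iii) a_ij * a_ji <= 0, i != j
        for j in range(i + 1, n):
            if A[i, j] * A[j, i] > tol:
                return False
    for k in range(3, n + 1):                    # (iv) every cycle of length >= 3 vanishes
        for idx in combinations(range(n), k):
            for perm in permutations(idx):
                cycle = list(perm) + [perm[0]]
                prod = 1.0
                for a, b in zip(cycle[:-1], cycle[1:]):
                    prod *= A[a, b]
                if abs(prod) > tol:
                    return False
    if abs(np.linalg.det(A)) < tol:              # (v) det(A) != 0
        return False
    return True

A = np.array([[-1.0, 2.0, 0.0],
              [-3.0, 0.0, 4.0],
              [0.0, -5.0, -1.0]])                # a predator-prey chain with two self-regulating species
print(is_sign_stable_candidate(A))               # expected: True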

With this algorithm, all matrices that are sign stable can be stored a priori as discussed in [36]. If a sign pattern in a given matrix satisfies the conditions given in the above papers (and thus in the algorithm), it is an ecologically stable sign pattern and hence that matrix is Hurwitz stable for any magnitudes in its entries. A subtle distinction between 'sign stable' matrices and 'ecological sign stable' matrices is now made, emphasizing the role of the nature of the interactions. Though the property of Hurwitz stability holds in both cases, ecosystems sustain themselves solely because of interactions between various species. In matrix notation this means that the nature of the off-diagonal elements is essential for an ecosystem. Consider the upper triangular 3×3 sign pattern



A =
[ -  0  + ]
[ 0  -  0 ]
[ 0  0  - ]




From quantitative viewpoint, it is seen that the matrix is Hurwitz stable for any magnitudes 
in the entries of the matrix. This means that it is indeed (qualitatively) sign stable. But since 
there is no predator-prey link and in fact no link at all between species 1&2 and 3&2, such a 




digraph cannot represent an ecosystem. Therefore, though a matrix is sign stable, it need not 
belong to the class of ecological sign stable matrices. In Figure 2 below, these various classes 
of sign patterns and the corresponding relationship between these classes is depicted. So, 
every ecological sign stable sign pattern is sign stable but the converse is not true. 
With this brief review of ecological system principles, the implications of these ecological 
qualitative principles on three quantitative matrix theory properties, namely eigenvalues, 
normality/ condition number and robust stability are investigated. In particular, in the next 
section, new results that clearly establish these implications are presented. As mentioned in 
the previous section, the motivation for this study and analysis is to exploit some of these 
desirable features of ecological system principles to design controllers for engineering 
systems to make them more robust. 



All sign patterns 

All stable sign patterns 

All ecologically stable 
sign patterns 




Fig. 2. Classification of sign patterns 

4.2 Ecological sign stability and its implications in quantitative matrix theory 

In this major section of this chapter, focusing on the ecological sign stability aspect discussed 
above, its implications in the quantitative matrix theory are established. In particular, the 
section offers three explicit contributions to expand the current knowledge base, namely i) 
Eigenvalue distribution of ecological sign stable matrices ii) Normality/ Condition number 
properties of sign stable matrices and iii) Robustness properties of sign stable matrices. 
These three contributions in turn help in determining the role of magnitudes in quantitative 
ecological sign stable matrices. This type of information is clearly helpful in designing 
robust controllers as shown in later sections. With this motivation, a 3-species ecosystem is 
thoroughly analyzed and the ecological principles in terms of matrix properties that are of 
interest in engineering systems are interpreted. This section is organized as follows: First, 
new results on the eigenvalue distribution of ecological sign stable matrices are presented. 
Then considering ecological systems with only predation-prey type interactions, it is shown 
how selection of appropriate magnitudes in these interactions imparts the property of 
normality (and thus highly desirable condition numbers) in matrices. In what follows, for 
each of these cases, concepts are first discussed from an ecological perspective and then later 
the resulting matrix theory implications from a quantitative perspective are presented.
Stability and eigenvalue distribution 

Stability is the most fundamental property of interest to all dynamic systems. Clearly, in 
time invariant matrix theory, stability of matrices is governed by the negative real part 




nature of its eigenvalues. It is always useful to get bounds on the eigenvalue distribution of 

a matrix with as little computation as possible, hopefully as directly as possible from the 

elements of that matrix. It turns out that sign stable matrices have interesting eigenvalue 

distribution bounds. A few new results are now presented in this aspect. 

In what follows, the quantitative matrix theory properties for an n-species ecological system 

is established, i.e., an nxn sign stable matrix with predator-prey and commensal/ ammensal 

interactions is considered and its eigenvalue distribution is analyzed. In particular, various 

cases of diagonal elements' nature, which are shown to possess some interesting eigenvalue 

distribution properties, are considered. 

Bounds on real part of eigenvalues 

Based on several observations the following theorem for eigenvalue distribution along the 

real axis is stated. 

Theorem 1 [37] (case of all negative diagonal elements):

For all n×n sign stable matrices with all negative diagonal elements, the bounds on the real parts of the eigenvalues are given as follows: the lower bound on the magnitude of the real part is given by the minimum magnitude diagonal element and the upper bound is given by the maximum magnitude diagonal element of the matrix. That is, for an n×n ecological sign stable matrix A = [a_ij],

|a_ii|_min ≤ |Re(λ)|_min ≤ |Re(λ)|_max ≤ |a_ii|_max                    (13)

Corollary (case of some diagonal elements being zero):

If the ecological sign stable matrix has zeros on the diagonal, the bounds are given by

|a_ii|_min (= 0) ≤ |Re(λ)|_min ≤ |Re(λ)|_max ≤ |a_ii|_max              (14)

The sign pattern in Example 1 has all negative diagonal elements. In this example, the case discussed in the corollary, where one of the diagonal elements is zero, is considered. This sign pattern is as shown in the matrix below.

A =
[ 0  -  0 ]
[ +  -  + ]
[ 0  -  - ]



Bounds on the imaginary parts of eigenvalues [38]

Similarly, the following theorem can be stated for bounds on the imaginary parts of the eigenvalues of an n×n ecological sign stable matrix.

Theorem 2

For all n×n ecologically sign stable matrices, the bound on the imaginary parts of the eigenvalues is given by

|Imag(λ)|_max ≤ sqrt( Σ_{i<j} ( −a_ij a_ji ) )                          (15)

These results are illustrated in Figure 3.









Fig. 3. Eigenvalue distribution for sign stable matrices 
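A numerical illustration of Theorems 1 and 2, assuming NumPy: random magnitudes are inserted into a 5-species predator-prey chain sign pattern (the same kind of pattern used later in this section) and the eigenvalue bounds are checked against the diagonal magnitudes and the link products.

import numpy as np

rng = np.random.default_rng(0)
sign = np.array([[-1,  1,  0,  0,  0],
                 [-1, -1,  1,  0,  0],
                 [ 0, -1, -1,  1,  0],
                 [ 0,  0, -1, -1,  1],
                 [ 0,  0,  0, -1, -1]], dtype=float)

for _ in range(5):
    A = sign * rng.uniform(0.5, 5.0, size=sign.shape)     # arbitrary magnitudes, same sign pattern
    lam = np.linalg.eigvals(A)
    d = np.abs(np.diag(A))
    re_ok = (d.min() <= np.abs(lam.real).min() + 1e-9) and (np.abs(lam.real).max() <= d.max() + 1e-9)
    im_bound = np.sqrt(np.sum(np.triu(-A * A.T, k=1)))    # sqrt of the sum over links of -a_ij a_ji
    im_ok = np.abs(lam.imag).max() <= im_bound + 1e-9
    print(re_ok, im_ok)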

Theorem 3 

For all nxn matrices, with all k-cycles being zero and with only commensal or ammensal interactions, 

the eigenvalues are simply the diagonal elements. 

It is clear that these theorems offer significant insight into the eigenvalue distribution of nxn 

ecological sign stable matrices. Note that the bounds can be simply read off from the 

magnitudes of the elements of the matrices. This is quite in contrast to the general 

quantitative Hurwitz stable matrices where the lower and upper bounds on the eigenvalues 

of a matrix are given in terms of the singular values of the matrix and/ or the eigenvalues of 

the symmetric part and skew-symmetric parts of the matrices (using the concept of field of 

values), which obviously require much computation, and are complicated functions of the 

elements of the matrices. 

Now label the ecological sign stable matrices with magnitudes inserted in the elements as 

'quantitative ecological sign stable matrices'. Note that these magnitudes can be arbitrary in 

each non zero entry of the matrix! It is interesting and important to realize that these 

bounds, based solely on sign stability, do not reflect diagonal dominance, which is the 

typical case with general Hurwitz stable matrices. Taking the above theorems and their

respective corollaries into consideration, we can say that it is the 'diagonal connectance' that

is important in these quantitative ecological sign stable matrices and not the 'diagonal 

dominance' which is typical in the case of general Hurwitz stable matrices. This means that 

interactions are critical to system stability even in the case of general nxn matrices. 

Now the effect on the quantitative property of normality is presented. 

Normality and condition number 

Based on this new insight on the eigenvalue distribution of sign stable matrices, other matrix 

theory properties of sign stable matrices are investigated. The first quantitative matrix 

theory property is that of normality / condition number. But this time, the focus is only on 

ecological sign stable matrices with pure predator-prey links with no other types of 

interactions. 






A zero diagonal element implies that a species has no control over its growth/decay rate. So in order to regulate the population of such a species, it is essential that, in a sign stable ecosystem model, this species be connected to at least one predator-prey link. In the case where all diagonal elements are negative, the matrix represents an ecosystem with all self-regulating species. If every species has control over its own regulation, a limiting case for stability is a system with no interspecies interactions. This means that there need not be any predator-prey interactions. This is a trivial ecosystem, and such matrices actually belong only to the 'sign stable' set, not to the ecological sign stable set.

Apart from the self-regulatory characteristics of species, the phenomena that contribute to 
the stability of a system are the type of interactions. Since a predator-prey interaction has a 
regulating effect on both the species, predator-prey interactions are of interest in this 
stability analysis. In order to study the role played by these interactions, henceforth focus is 
on systems with n-1 pure predator-prey links in specific places. This number of links and the specific locations of the links are critical, as they connect all species while at the same time preserving the property of ecological sign stability. For a matrix A, a pure predator-prey link structure implies that

1. A_ij A_ji < 0 for all i ≠ j with A_ij ≠ 0
2. A_ij = 0 if and only if A_ji = 0

Hence, in what follows, matrices with all negative diagonal elements and with pure predator-prey links are considered.

Consider sign stable matrices with identical diagonal elements (negative) and pure predator-prey links of equal strengths; such matrices turn out to be normal (see Theorem 4 below). Normality in turn implies that the modal matrix of the matrix is orthogonal, resulting in a condition number of one, which is an extremely desirable property for all matrices occurring in engineering applications.

The property of normality is observed in higher order systems too. Consider, for a 5-species system, an ecologically sign stable pattern with purely predator-prey interactions; the sign pattern matrix A given below represents the corresponding digraph.




A =
[ -  +  0  0  0 ]
[ -  -  +  0  0 ]
[ 0  -  -  +  0 ]
[ 0  0  -  -  + ]
[ 0  0  0  -  - ]



Theorem 4

An n×n matrix A with equal diagonal elements and equal predator-prey interaction strengths for each predator-prey link is a normal matrix.

The property of κ = 1 is of great significance in the study of robustness of stable matrices. This significance will be explained in the next section, eventually leading to a robust control design algorithm.
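A small numerical check of Theorem 4, assuming NumPy: equal diagonal elements plus antisymmetric, equal-strength predator-prey couplings yield a normal matrix whose modal matrix has condition number (essentially) one. The chain structure below is an illustrative choice.

import numpy as np

d, c = -1.0, 2.0
A = d * np.eye(5)
for i in range(4):                    # a 5-species predator-prey chain of equal strength c
    A[i, i + 1] = c
    A[i + 1, i] = -c

print(np.allclose(A @ A.T, A.T @ A))  # normality: A A^T = A^T A
_, V = np.linalg.eig(A)
print(np.linalg.cond(V))              # condition number of the modal matrix, approximately 1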

Robustness 

The third contribution of this section is related to the connection between ecological sign 

stability and robust stability in engineering systems. 




As mentioned earlier, the most interesting feature of ecological sign stable matrices is that the 

stability property is independent of the magnitude information in the entries of the matrix. 

Thus the nature of interactions, which in turn decide the signs of the matrix entries and their 

locations in the matrix, are sufficient to establish the stability of the given sign matrix. Clearly, 

it is this independence (or non-dependence) from magnitude information that imparts the 

property of robust stability to engineering systems. This aspect of robust stability in 

engineering systems is elaborated next from quantitative matrix theory point of view. 

Robustness as a result of independence from magnitude information 

In mathematical sciences, the aspect of 'robust stability' of families of matrices has been an 

active topic of research for many decades. This aspect essentially arises in many applications 

of system and control theory. When the system is described by linear state space 

representation, the plant matrix elements typically depend on some uncertain parameters 

which vary within a given bounded interval. 

Robust stability analysis of a class of interval matrices [39]:

Consider the 'interval matrix family' in which each individual element varies independently within a given interval. Thus the interval matrix family is denoted by A ∈ [A_L, A_U], the set of all matrices A that satisfy

(A_L)_ij ≤ A_ij ≤ (A_U)_ij   for every i, j

Now, consider a special 'class of interval matrix family' in which, for each element that is varying, the lower bound (A_L)_ij and the upper bound (A_U)_ij are of the same sign. For example, consider the interval matrix given by

A =
[  0     a12    a13 ]
[ a21     0      0  ]
[ a31     0     a33 ]

with the elements a12, a13, a21, a31 and a33 being uncertain, varying in the given intervals as follows:

2 ≤ a12 ≤ 5,   1 ≤ a13 ≤ 4,   −3 ≤ a21 ≤ −1,   −4 ≤ a31 ≤ −2,   −5 ≤ a33 ≤ −0.5

Qualitative stability as a 'sufficient condition' for robust stability of a class of interval 

matrices: A link between life sciences and engineering sciences 

It is clear that ecological sign stable matrices have the interesting feature that once the sign 
pattern is a sign stable pattern, the stability of the matrix is independent of the magnitudes 
of the elements of the matrix. That this property has direct link to stability robustness of 
matrices with structured uncertainty was recognized in earlier papers on this topic [32] and 
[33]. In these papers, a viewpoint was put forth that advocates using the 'qualitative 
stability' concept as a means of achieving 'robust stability' in the standard uncertain matrix 
theory and offers it as a 'sufficient condition' for checking the robust stability of a class of
interval matrices. This argument is illustrated with the following examples. 
Consider the above given 'interval matrix'. Once it is recognized that the signs of the interval entries in the matrix are not changing (within the given intervals), the sign matrix can be formed. The 'sign' matrix for this interval matrix is given by

[ 0  +  + ]
[ -  0  0 ]
[ -  0  - ]



The above 'sign' matrix is known to be qualitatively (sign) stable. Since sign stability is independent of the magnitudes of the entries of the matrix, it can be concluded that the above interval matrix is robustly stable in the given interval ranges. Incidentally, if the 'vertex algorithm' of [40] is applied to this problem, it can also be concluded that this 'interval matrix family' is indeed Hurwitz stable in the given interval ranges.

In fact, more can be said about the 'robust stability' of this matrix family using the 'sign stability' 
application. This matrix family is indeed robustly stable, not only for those given interval 
ranges above, but it is also robustly stable for any large 'interval ranges' in those elements as 
long as those interval ranges are such that the elements do not change signs in those interval ranges. 
In the above discussion, the emphasis was on exploiting the sign pattern of a matrix in 
robust stability analysis of matrices. Thus, the tolerable perturbations are direction sensitive. 
Also, no perturbation is allowed in the structural zeroes of the ecological sign stable 
matrices. In what follows, it is shown that ecological sign stable matrices can still possess 
superior robustness properties even under norm bounded perturbations, in which 
perturbations in structural zeroes are also allowed in ecological sign stable matrices. 
Towards this objective, the stability robustness measures of linear state space systems as 
discussed in [39] and [2] are considered. In other words, a linear state space plant matrix A, 
which is assumed to be Hurwitz stable, is considered. Then assuming a perturbation matrix 
E in the A matrix, the question as to how much of norm of the perturbation matrix E can be 
tolerated to maintain stability is asked. Note that in this norm bounded perturbation 
discussion, the elements of the perturbation matrix can vary in various directions without 
any restrictions on the signs of the elements of that matrix. When bounds on the norm of E 
are given to maintain stability, it is labeled as robust stability for unstructured, norm 
bounded uncertainty. We now briefly recall two measures of robustness available in the 
literature [2] for robust stability of time varying real parameter perturbations. 
Norm bounded robustness measures

Consider a given Hurwitz stable matrix A0 with perturbation E such that

A = A0 + E                                                            (16)

where A is any one of the perturbed matrices. A sufficient bound for the stability of the perturbed system, given on the spectral norm of the perturbation matrix, is

σ_max(E) < σ_s / κ ≡ μ_e                                               (17)

where σ_s = |Re(λ(A0))|_min is the magnitude of the real part of the dominant eigenvalue, also known as the stability degree, and κ is the condition number of the modal matrix of A0.

Theorem 5 [38]

Let A_NN be a unit norm, normal, ecological sign stable matrix and let B_NN be a unit norm, normal, Hurwitz stable matrix that is not ecological sign stable. Then

|Re(λ(A_NN))|_min ≥ |Re(λ(B_NN))|_min,   i.e.   μ(A_NN) ≥ μ(B_NN)       (18)




In other words, a unit norm, normal ecological sign stable matrix is more robust than a unit norm, normal non-ecological sign stable Hurwitz stable matrix.

The second norm bound, based on the solution of the Lyapunov matrix equation [7], is given as

μ_v = 1 / σ_max(P)                                                     (19)

where P is the solution of the Lyapunov equation for the nominal stable matrix A0 given by

A0^T P + P A0 + 2I = 0

Based on this bound, the following result is proposed:

Theorem 6

The norm bound μ_v of a target SS matrix S is d, where d is the magnitude of the diagonal elements of S, i.e.

μ_v = d                                                                (20)

This means that for any given value of μ_v we can, by mere observation, determine a corresponding stable target matrix!
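A small illustration of Theorem 6, assuming NumPy/SciPy: for a matrix built as −d·I plus an antisymmetric predator-prey coupling (an illustrative target of the kind described above), the Lyapunov-based bound μ_v of (19) reproduces the diagonal magnitude d.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

d, c = 0.75, 1.3
S = np.array([[0.0, c, 0.0],
              [-c, 0.0, c],
              [0.0, -c, 0.0]])
At = -d * np.eye(3) + S                       # target sign stable matrix

P = solve_continuous_lyapunov(At.T, -2.0 * np.eye(3))
mu_v = 1.0 / np.linalg.norm(P, 2)
print(mu_v, d)                                # the two values agree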

This gives impetus to design controllers that drive the closed loop system to a target matrix. 
Towards this objective, an algorithm for the design of a controller based on concepts from 
ecological sign stability is now presented. 

4.3 Robust control design based on ecological sign stability 

Extensive research in the field of robust control design has led to popular control design methods in the frequency domain such as H∞ and μ-synthesis. Though these methods perform well in the frequency domain, they become very conservative when applied to the problem of accommodating real parameter uncertainty. On the other hand, there are very few robust control design methods in the time domain that explicitly address real parameter uncertainty [41-47]. Even these few methods tend to be complex and demand some specific structure in the real parameter uncertainty (such as matching conditions). Therefore, as an alternative to existing methods, the distinct feature of the control design method inspired by ecological principles is its problem formulation, in which the robustness measure appears explicitly in the design methodology.

4.3.1 Problem formulation 

The problem formulation for this novel control design method is as follows: 
For a given linear system 

ẋ(t) = Ax(t) + Bu(t)                                                   (21)

design a full-state feedback controller 

u = Gx (22) 






where the closed loop system matrix

A_cl = A + B G,   B ∈ R^(n×m), G ∈ R^(m×n), A_cl ∈ R^(n×n)             (23)

possesses a desired robustness bound μ (there is no restriction on the value this bound can assume).

Since the eigenvalue distribution, condition number (normality) and robust stability properties have established the superiority of target matrices, they become an obvious choice for the closed loop system matrix A_cl. Note that for such a target matrix the desired bound μ = μ_e = μ_v. Therefore, the robust control design method proposed in the next section addresses the three viewpoints of robust stability simultaneously.



4.3.2 Robust control design algorithm 

Consider the LTI system

ẋ = Ax + Bu

Then, for the full-state feedback controller u = Gx, the closed loop system matrix is given by

A_cl = A + B G,   B ∈ R^(n×m), G ∈ R^(m×n), A_cl ∈ R^(n×n)

Let A_cl − A = A_Δ (= BG).                                             (24)

The control design method is carried out in the following steps:
1. Determination of the existence of the controller [38]
2. Determination of an appropriate closed loop system [38]
3. Determination of the control gain matrix [48]

The following example illustrates this simple and straightforward control design method.

Application: Satellite formation flying control problem 

The above control algorithm is now illustrated for the application discussed in [32], [33] and 

[49]. 



With the state vector chosen as [x, y, ẋ, ẏ]^T, the linearized relative motion equations take the state space form

d/dt [x, y, ẋ, ẏ]^T = A [x, y, ẋ, ẏ]^T + B [T_x, T_y]^T                 (25)

with

A =
[  0     0     1     0  ]
[  0     0     0     1  ]
[ 3ω²    0     0    2ω  ]
[  0     0   −2ω     0  ]

B =
[ 0  0 ]
[ 0  0 ]
[ 1  0 ]
[ 0  1 ]

where x, y, ẋ and ẏ are the state variables and T_x and T_y are the control variables. For example, when ω = 1, the system matrices become

A =
[ 0   0   1   0 ]
[ 0   0   0   1 ]
[ 3   0   0   2 ]
[ 0   0  −2   0 ]

B =
[ 0  0 ]
[ 0  0 ]
[ 1  0 ]
[ 0  1 ]



Clearly, the first two rows of A_cl cannot be altered and hence a target matrix with all non-zero elements cannot be achieved. Therefore, a controller such that the closed loop system has as many features of a target SS matrix as possible is designed as given below.






Accordingly, an ecological sign stable closed loop system is chosen such that

i.  the closed loop matrix has as many pure predator-prey links as possible;
ii. it also has as many negative diagonal elements as possible.

Taking the above points into consideration, the following sign pattern, which is appropriate for the given A and B matrices, is chosen:

A_t (sign pattern) =
[ 0   0   +   0 ]
[ 0   0   0   + ]
[ −   0   −   + ]
[ 0   −   −   − ]

and, with magnitudes inserted (for ω = 1),

A_t =
[  0    0    1    0 ]
[  0    0    0    1 ]
[ −1    0   −1    2 ]
[  0   −1   −2   −1 ]



The magnitudes of the entries of the above sign matrix are decided by the stability robustness analysis discussed previously, i.e.,

i.  all non-zero a_ii are identical;
ii. a_ij = −a_ji for all non-zero a_ij; otherwise a_ij = a_ji = 0.

Hence, each pure predator-prey link has interaction strengths of equal magnitude in its two directions and the non-zero diagonal elements have identical self-regulatory intensities. Using the algorithm given above, the gain matrix is computed as

G_es =
[ −4.0     0    −1.0     0   ]
[   0    −1.0     0    −1.0  ]



The closed loop matrix A_cl (= A + B G_es) is sign stable and hence can tolerate any amount of variation in the magnitudes of its elements as long as the sign pattern is kept constant. In this application, it is clear that all non-zero elements in the open loop matrix (excluding the elements A13 and A24, since they correspond to the dummy states used to transform the system into a set of first order differential equations) are functions of the angular velocity ω. Hence, real life perturbations in this system occur only due to variation in the angular velocity ω. Therefore, a perturbed satellite system is simply an A matrix generated by a different ω. This means that not every randomly chosen matrix represents a physically perturbed system and that, for practical purposes, stability of the matrices generated as mentioned above (by varying ω) is sufficient to establish the robustness of the closed loop system. It is only because of the ecological perspective that these structural features of the system are brought to light. Also, it is the application of these ecological principles that makes the control design for satellite formation flying this simple and insightful.
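A short numerical check of this design, assuming NumPy and using the A, B and G_es matrices as reconstructed above (ω = 1): the closed loop matrix should reproduce the chosen target and be Hurwitz stable.

import numpy as np

A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [3.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, -2.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
G_es = np.array([[-4.0, 0.0, -1.0, 0.0],
                 [0.0, -1.0, 0.0, -1.0]])

Acl = A + B @ G_es
print(Acl)                                    # matches the ecological sign stable target above
print(np.linalg.eigvals(Acl).real.max())      # negative, hence Hurwitz stable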

Ideally, we would like A_t to be the eventual closed loop system matrix. However, it may be difficult to achieve this objective for any given controllable pair (A,B). Therefore, we propose to achieve a closed loop system matrix that is close to A_t. Thus the closed loop system is expressed as

A_cl = A + BG = A_t + ΔA                                               (26)

Noting that ideally we would like ΔA = 0, we impose this condition. Then A_cl = A_t = A + BG.

i. When B is square and invertible: as given previously,

A_cl = A_t   and   G = B^{-1}(A_t − A)






ii. When B is not square but has full column rank: consider B⁺, the pseudo inverse of B, where, for B of size n×m with n > m,

B⁺ = (B^T B)^{-1} B^T

Then G = B⁺(A_t − A). Because of the error associated with the pseudo inverse operation, the expression for the closed loop system is as follows [34]:

A_t + ΔE = A + BG = A + B (B^T B)^{-1} B^T (A_t − A)                    (27)

Let B (B^T B)^{-1} B^T = B_aug. Then

ΔE = (A − A_t) + B_aug (A_t − A) = −(A_t − A) + B_aug (A_t − A) = (B_aug − I)(A_t − A)

∴ ΔE = (B_aug − I)(A_t − A)                                            (28)

which should be as small as possible. Therefore, the aim is to minimize the norm of ΔE. Thus, for a given controllable pair (A,B), we use the elements of the desired closed loop matrix A_t as design variables to minimize the norm of ΔE.
We now apply this control design method to the aircraft longitudinal dynamics problem.

Application: Aircraft flight control 

Consider the following short period mode of the longitudinal dynamics of an aircraft [50]. 



ẋ =
[ −0.334     1     ] x  +  [ −0.027 ] u                                (29)
[ −2.52    −0.387  ]       [ −2.6   ]





              Open loop A                  Target matrix A_t               Closed loop A_cl

Matrix        [ −0.334      1      ]       [ −0.3181     1.00073 ]        [ −0.3182     1.00073 ]
              [ −2.52     −0.387   ]       [ −1.00073   −0.3181  ]        [ −1.00073   −0.319   ]

Eigenvalues   −0.3605 ± j1.5872            −0.3181 ± j1.00073             −0.31816 ± j1.000722

Norm bound    0.2079                       0.3181                         0.3181426



The open loop matrix properties are summarized in the first column of the table above. Note that the open loop system matrix is stable and has a Lyapunov based robustness bound μ_op = 0.2079. Now, for the above controllable pair (A,B), we proceed with the proposed control design procedure discussed before, with the target SS matrix A_t elements as design variables, which very quickly yields the following results. A_t is calculated by minimizing σ_max(ΔE).




Here σ_max(ΔE) = 1.2381 × 10⁻⁴. For this value, the properties of the target matrix are as given in the table above. From the expression for G, we get

G = [ −0.5843   −0.0265 ]

With this controller, the closed loop matrix A_cl is determined. It is easy to observe that the eventual closed loop system matrix is extremely close to the target SS matrix (since σ_max(ΔE) ≈ 0), and hence the resulting robustness bound can simply be read off from the diagonal elements of the target SS matrix, which in this example is essentially equal to the eventual closed loop system matrix. As expected, this robustness measure of the closed loop system is appreciably greater than the robustness measure of the open loop system. This robust controller methodology thus promises to be a desirable alternative to other robustness based controllers encompassing many fields of application.
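The numbers quoted above can be checked directly, assuming NumPy/SciPy:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-0.334, 1.0], [-2.52, -0.387]])
B = np.array([[-0.027], [-2.6]])
G = np.array([[-0.5843, -0.0265]])

def lyapunov_bound(M):
    # mu_v = 1/sigma_max(P) with M^T P + P M + 2I = 0, as in (19)
    P = solve_continuous_lyapunov(M.T, -2.0 * np.eye(M.shape[0]))
    return 1.0 / np.linalg.norm(P, 2)

Acl = A + B @ G
print(np.linalg.eigvals(A))       # approximately -0.3605 +/- j1.5872
print(np.linalg.eigvals(Acl))     # approximately -0.318  +/- j1.0007
print(lyapunov_bound(A))          # approximately 0.2079, as quoted above
print(lyapunov_bound(Acl))        # approximately 0.3181, as quoted above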

5. Conclusions and future directions 

In this book chapter, robust control theory is presented essentially from a state space perspective. We presented the material in two distinct parts. In the first part of the chapter, robust control theory is presented from a quantitative (engineering) perspective, making extensive use of state space models of dynamic systems. Both robust stability analysis and robust control design were addressed and elaborated. Robust stability analysis involved studying and quantifying the tolerable bounds for maintaining the stability of a nominally stable dynamic system. Robust control design dealt with the issue of synthesizing a controller to keep the closed loop system stable under the presence of a given set of perturbations. This chapter focused on characterizing the perturbations essentially as 'real parameter' perturbations, and all the techniques presented accommodate this particular modeling error. In the second part of the chapter, robustness is treated from a completely new perspective, namely from concepts of population (community) ecology, thereby emphasizing the 'qualitative' nature of the stability robustness problem. In this connection, the analysis and design aspects were directed towards studying the role of the 'signs' of the elements of the state space matrices in maintaining the stability of the dynamic system. Thus the concept of 'sign stability' from the field of ecology was brought to the engineering community. This concept is relatively new to the engineering community. The analysis and control design for engineering systems using ecological principles as presented in this chapter is expected to spur exciting new research in this area and provide new directions for future research. In particular, the role of 'interactions and interconnections' in engineering dynamic systems is shown to be of paramount importance in imparting robustness to the system, and more research is clearly needed to take full advantage of these promising ideas. This research is expected to pave the way for fruitful collaboration between population (community) ecologists and control systems engineers.

6. References 

[1] Dorato, P., "Robust Control", IEEE Press, New York, N.Y., 1987 

[2] Dorato, P., and Yedavalli, R. K., (Eds) Recent Advances in Robust Control, IEEE Press, 
1991, pp. 109-111. 




[3] Skelton, R. "Dynamic Systems Control, " John Wiley and Sons, New York, 1988 

[4] Barmish, B. R., "New Tools for Robustness of Linear Systems", Macmillan Publishing 

Company, New York, 1994 
[5] Bhattacharya, S. P., Chapellat, H., and Keel, L. H., "Robust Control: The Parameteric 

Approach", Prentice Hall, 1995 
[6] Yedavalli, R. K., "Robust Control of Uncertain Dynamic Systems: A Linear State Space 

Approach", Springer, 2011 (to be published). 
[7] Patel, R.V. and Toda, M., "Quantitative Measures of Robustness for Multivariable 

Systems," Proceedings of Joint Automatic Control Conference, TP8-A, 1980. 
[8] Yedavalli, R.K., Banda, S.S., and Ridgely, D.B., "Time Domain Stability Robustness 

Measures for Linear Regulators," AIAA Journal of Guidance, Control and 

Dynamics, pp. 520-525, July- August 1985. 
[9] Yedavalli, R.K. and Liang, Z., "Reduced Conservation in Stability Robustness Bounds by 

State Transformation," IEEE Transactions on Automatic Control, Vol. AC-31, pp. 

863-866, September 1986. 
[10] Hyland, D.C., and Bernstein, D.S., "The Majorant Lyapunov Equations: A Nonnegative 

Matrix Equation for Robust Stability and Performance of Large Scale Systems," 

IEEE Transactions on Automatic Control, Vol. AC-32, pp. 1005-1013, November 

1987. 
[11] Yedavalli, R.K., "Improved Measures of Stability-Robustness for Linear State Space 

Models," IEEE Transactions on Automatic Control, Vol. AC-30, pp. 577-579, June 

1985. 
[12] Yedavalli, R.K., "Perturbation Bounds for Robust Stability in Linear State Space 

Models," International Journal of Control, Vol. 42, No. 6, pp. 1507-1517, 1985. 
[13] Qiu, L. and Davison, E.J., "New Perturbation Bounds for the Robust Stability of Linear 

State Space Models," Proceedings of the 25th Conference on Decision and Control, 

Athens, Greece, pp. 751-755, 1986. 
[14] Hinrichsen, D. and Pritchard, A. J., "Stability Radius for Structured Perturbations and 

the Algebraic Riccati Equation", Systems and Control Letters, Vol. 8, pp: 105-113, 

198. 
[15] Zhou, K. and Khargonekar, P., "Stability Robustness Bounds for Linear State Space 

models with Structured Uncertainty," IEEE Transactions on Automatic Control, 

Vol. AC-32, pp. 621-623, July 1987. 
[16] Tesi, A. and Vicino, A., "Robustness Analysis for Uncertain Dynamical Systems with 

Structured Perturbations," Proceedings of the International Workshop on 

'Robustness in Identification and Control,' Torino, Italy, June 1988. 
[17] Vinkler, A. and Wood, L. J., "A comparison of General techniques for designing 

Controllers of Uncertain Dynamics Systems", Proceedings of the Conference on 

Decision and Control, pp: 31-39, San Diego, 1979 
[18] Chang, S. S. L. and Peng, T. K. C, "Adaptive Guaranteed Cost Control of Systems with 

Uncertain Parameters," IEEE Transactions on Automatic Control, Vol. AC-17, Aug. 

1972 
[19] Thorp, J. S., and Barmish B. R., "On Guaranteed Stability of Uncertain Linear Systems 

via Linear Control", Journal of Optimization Theory and Applications, pp: 559-579, 

December 1981. 



Robust Stability and Control of Linear Interval Parameter Systems 

Using Quantitative (State Space) and Qualitative (Ecological) Perspectives 65 



po; 

[21 
[22 
[23; 

[24 
[25; 

[26; 

[27 
[28; 

[29; 

[30 
[31 

[32 

[33 

[34; 
[35; 
[36; 
[37 



Hollot, C. V. and Barmish, B. R., " Optimal Quadratic Stabilizability of Uncertain Linear 

Systems" Proceedings of 18 th Allerton Conference on Communication, Control and 

Computing, University of Illinois, 1980. 
Schmitendorf, W. E., " f A design methodology for Robust Stabilizing Controllers", AIAA 

Guidance and Control Conference, West Virginia, 1986 
Ackermann, J., "Multimodel Approaches to Robust Control Systems Design", IF AC 

Workshop on Model Error Concepts and Compensations, Boston, 1985. 
Yedavalli, R. K., "Time Domain Control Design for Robust Stability of Linear 

Regulators: Applications to Aircraft Control", Proceedings of the American control 

Conference, 914-919, Boston 1985. 
Yedavalli, R. K., "Linear Control design for Guaranteed Stability of Uncertain Linear 

Systems", Proceedings of the American control Conference, 990-992, Seattle, 1986. 
Leah Edelstein-Keshet., Mathematical models in Biology, McGraw Hill, 1988. pp. 234-236. 
Hofbauer, J., and Sigmund, K., "Growth Rates and Ecological Models: ABC on ODE," 

The Theory of Evolutions and Dynamical Systems, Cambridge University Press, 

London, 1988, pp. 29-59. 
Dambacher, J. M., Luh, H-K., Li, H. W., and Rossignol, P. A., "Qualitative Stability and 

Ambiguity in Model Ecosystems," The American Naturalist, Vol. 161, No. 6, June 

2003, pp. 876-888. 
Logofet, D., Matrices and Graphs: Stability Problems in Mathematical Ecology, CRC Press, 

Boca Raton, 1992R, pp. 1-35. 
Quirk, J., and Ruppert. R., "Qualitative Economics and the Stability of Equilibrium," 

Reviews in Economic Studies, 32, 1965, pp. 311-326. 
May. R., Stability and Complexity in Model Ecosystems, Princeton University Press, 

Princeton, N.J., 1973, pp. 16-34. 
Jeffries, C, "Qualitative Stability and Digraphs in Model Ecosystems," Ecology, Vol. 55, 

1974, pp. 1415-1419. 
Yedavalli, R. K., "Qualitative Stability Concept from Ecology and its Use in the Robust 

Control of Engineering Systems," Proceedings of the American Control Conference, 

Minneapolis, June 2006, pp. 5097-5102. 
Yedavalli, R. K., "New Robust and Non-Fragile Control Design Method for Linear 

Uncertain Systems Using Ecological Sign Stability Approach," Proceedings of the 

International Conference on Advances in Control and Optimisation of Dynamical Systems. 

(ACODS07), Bangalore, India, 2007, pp. 267-273. 
Devarakonda, N. and Yedavalli, R. K., "A New Robust Control design for linear 

systems with norm bounded time varying real parameter uncertainty", Proceedings 

ofASME Dynamic Systems and Controls Conference, Boston, MA, 2010. 
Devarakonda, N. and Yedavalli, R. K., "A New Robust Control design for linear 

systems with norm bounded time varying real parameter uncertainty", IEEE 

Conference on Decision and Control, Atlanta, GA, 2010. 
Yedavalli, R. K., "Robust Control Design for Linear Systems Using an Ecological Sign 

Stability Approach," AIAA Journal of Guidance, Control and Dynamics, Vol. 32, No. 1, 

Jan-Feb, 2009, pp 348-352. 
Yedavalli, R. K. and Devarakonda, N. "Sign Stability Concept of Ecology for Control 

Design with Aerospace Applications", AIAA Journal of Guidance Control and 

Dynamics, Vol. 33, No. 2, pp. 333-346, 2010. 



66 Robust Control, Theory and Applications 



[38 
[39 
[40 

[41 

[42 

[43 
[44 
[45; 

[46; 

[47 
[48; 

[49; 

[50 



Devarakonda, N. and Yedavalli, R. K., " Engineering Perspective of Ecological Sign 

Stability and its Application in Control Design", Proceedings of American Control 

Conference, Baltimore, MD, July, 2010. 
Yedavalli, R. K., " Flight Control Application of New Stability Robustness Bounds for 

Linear Uncertain Systems/' Journal of Guidance, Control and Dynamics, Vol. 16, No. 6, 

Nov.-Dec. 1993. 
Yedavalli, R. K., " Robust Stability of Linear Interval Parameter Matrix Family Problem 

Revisited with Accurate Representation and Solution, " Proceedings of American 

Automatic Control Conference, June, 2009, pp 3710-3717 
Yedavalli, R. K., 1989. "Robust Control Design for Aerospace Applications". IEEE 

Transactions on Aerospace and Electronic Systems, 25(3), pp. 314 
Keel, L. H., Bhattacharya, S. P., and Howze, J. W., 1988. "Robust Control with 

Structured Perturbations", IEEE Transactions on Automatic Control, AC-33(1), pp. 

68 
Zhou, K., and P. Khargonekar, 1988. "Robust Stabilization of Linear Systems with Norm 

Bounded Time Varying Uncertainty", Systems and Control Letters, 8, pp. 17. 
Petersen, I, R., and Hollot, C. V., 1986. "A Riccati Equation Based Approach to the 

Stabilization of Uncertain Linear Systems", Automatica, 22(4), pp. 397. 
Petersen, I, R., 1985. "A Riccati Equation Approach to the Design of Stabilizing 

Controllers and Observers for a Class of Uncertain Linear Systems", IEEE 

Transactions AC-30(9) 
Barmish, B. R., Corless, M., and Leitmann, G., 1983. "A New Class of Stabilizing 

Controllers for Uncertain Dynamical Systems", SI AM J. Optimal Control, 21, 

pp.246-255. 
Barmish, B. R., Petersen, I. R., and Feuer, A., 1983. "Linear Ultimate Boundedness 

Control for Uncertain Dynamical Systems", Automatica, 19, pp. 523-532. 
Yedavalli, R. K., and Devarakonda, N., "Ecological Sign Stability and its Use in Robust 

Control Design for Aerospace Applications," Proceedings of IEEE Conference on 

Control Applications, Sept. 2008. 
Yedavalli, R. K., and Sparks. A., "Satellite Formation Flying and Control Design Based 

on Hybrid Control System Stability Analysis," Proceedings of the American 

Control Conference, June 2000, pp. 2210. 
Nelson, R., Flight Stability and Automatic Control. McGraw Hill. Chap. 1998. 



Part 2 
H-infinity Control 



Robust H∞ PID Controller Design Via
LMI Solution of Dissipative Integral
Backstepping with State Feedback Synthesis



Endra Joelianto 

Bandung Institute of Technology 
Indonesia 



1. Introduction 



The PID controller has been extensively used in industry since the 1940s and is still the most
often implemented controller today. The PID controller can be found in many application
areas: petroleum processing, steam generation, polymer processing, chemical industries,
robotics, unmanned aerial vehicles (UAVs) and many more. The algorithm of the PID
controller is a simple, single equation relating the proportional, integral and derivative
parameters. Nonetheless, it provides good control performance for many different processes.
This flexibility is achieved through three adjustable parameters whose values can be selected
to modify the behaviour of the closed loop system. A convenient feature of the PID controller
is its compatibility with enhancements that provide higher capabilities with the same basic
algorithm. Therefore, the performance of a basic PID controller can be improved through
judicious selection of these three values.

Many tuning methods are available in the literature, among which the most popular is the
Ziegler-Nichols (Z-N) method proposed in 1942 (Ziegler & Nichols, 1942). A drawback of
many of those tuning rules is that they do not consider load disturbance, model uncertainty,
measurement noise, and set-point response simultaneously. In general, tuning for high
performance control is always accompanied by low robustness (Shinskey, 1996). Difficulties
arise when the plant dynamics are complex and poorly modeled, or when the specifications
are particularly stringent. Even if a solution is eventually found, the process is likely to be
expensive in terms of design time. A variety of new methods have been proposed to improve
PID controller design, such as analytical tuning (Boyd & Barrat, 1991; Hwang & Chang,
1987), optimization based methods (Wong & Seborg, 1988; Boyd & Barrat, 1991; Astrom &
Hagglund, 1995), and gain and phase margin methods (Astrom & Hagglund, 1995; Fung et
al., 1998). Further improvement of the PID controller is sought by applying advanced control
designs (Ge et al., 2002; Hara et al., 2006; Wang et al., 2007; Goncalves et al., 2008).

In order to design with robust control theory, the PID controller is expressed as a state
feedback control law problem that can then be solved by using any full state feedback robust
control synthesis, such as Guaranteed Cost Design using Quadratic Bounds (Petersen et al.,
2000), H∞ synthesis (Green & Limebeer, 1995; Zhou & Doyle, 1998), Quadratic Dissipative
Linear Systems (Yuliar et al., 1997) and so forth. The selection of the PID parameters by
transforming them into state feedback using the linear quadratic method was first proposed
by Williamson and Moore in (Williamson & Moore, 1971). Preliminary applications were
investigated in (Joelianto & Tomy, 2003), followed by the work in (Joelianto et al., 2008),
which extended the method in (Williamson & Moore, 1971) to H∞ synthesis with dissipative
integral backstepping. Results showed that the robust H∞ PID controllers produce good
tracking responses without overshoot, good load disturbance responses, and minimize the
effect of plant uncertainties caused by the non-linearity of the controlled systems.
Although any robust control design can be implemented, in this paper the investigation is
focused on the theory of parameter selection of the PID controller based on the solution of
robust H∞ control, which is extended with full state dissipative control synthesis and the
integral backstepping method using an algebraic Riccati inequality (ARI). This paper also
provides detailed derivations and improved conditions compared with the previous papers
(Joelianto & Tomy, 2003) and (Joelianto et al., 2008). In this case, the selection is made via
control system optimization in robust control design by using linear matrix inequalities
(LMI) (Boyd et al., 1994; Gahinet & Apkarian, 1994). An LMI is a convex optimization
problem which offers a numerically tractable solution to control problems that may have no
analytical solution. Hence, reducing a control design problem to an LMI can be considered as
a practical solution to this problem (Boyd et al., 1994). Solving robust control problems by
reducing them to LMI problems has become a widely accepted technique (Balakrishnan &
Wang, 2000). General multi-objective control problems, such as H2 and H∞ performance,
peak to peak gain, passivity, regional pole placement and robust regulation, are notoriously
difficult, but these can be solved by formulating the problems as linear matrix inequalities
(LMIs) (Boyd et al., 1994; Scherer et al., 1997).

The objective of this paper is to propose a parameter selection technique for the PID
controller within the framework of robust control theory with linear matrix inequalities. This
is an alternative method to optimize the adjustment of a PID controller to achieve the
performance limits and to determine the existence of satisfactory controllers by using only
two design parameters instead of the three well known parameters of the PID controller. By
using an optimization method, an absolute scale of merit for any design can be measured.
The advantage of the proposed technique is that it implements an output feedback control
(PID controller) while retaining the simplicity of the full state feedback design. The proposed
technique can be applied either to a single-input-single-output (SISO) or to a
multi-inputs-multi-outputs (MIMO) PID controller.

The paper is organised as follows. Section 2 describes the formulation of the PID controller in
the full state feedback representation. In Section 3, the synthesis of H∞ dissipative integral
backstepping is applied to the PID controller using two design parameters. This section also
provides a derivation of the algebraic Riccati inequality (ARI) formulation for the robust
control from the dissipative integral backstepping synthesis. Section 4 illustrates an
application of the robust PID controller to time delay uncertainty compensation in a
networked control system problem. Section 5 provides some conclusions.

2. State feedback representation of PID controller 

In order to design with robust control theory, the PID controller is expressed as a full state 
feedback control law. Consider a single input single output linear time invariant plant 
described by the linear differential equation 



ẋ(t) = A x(t) + B_2 u(t)
y(t) = C_2 x(t)                                                               (1)

with some uncertainties in the plant which will be explained later. Here, the states x ∈ R^n
are the solution of (1), and the control signal u ∈ R^1 is assumed to be the output of a PID
controller with input y ∈ R^1. The PID controller for the regulator problem is of the form

u(t) = K_1 ∫ y(t) dt + K_2 y(t) + K_3 (d/dt) y(t)                             (2)

which is an output feedback control law with K_1 = K_p / T_i, K_2 = K_p and K_3 = K_p T_d,
where K_p, T_i and T_d denote the proportional gain, integral time and derivative time of the
well known PID controller, respectively. The structure in equation (2) is known as the
standard PID controller (Astrom & Hagglund, 1995).

The control law (2) is expressed as a state feedback law using (1) by differentiating the plant
output y as follows:

y = C_2 x
ẏ = C_2 A x + C_2 B_2 u
ÿ = C_2 A^2 x + C_2 A B_2 u + C_2 B_2 u̇

This means that the derivative of the control signal (2) may be written as

(1 − K_3 C_2 B_2) u̇ − (K_3 C_2 A^2 + K_2 C_2 A + K_1 C_2) x − (K_3 C_2 A B_2 + K_2 C_2 B_2) u = 0        (3)

Using the notation K̄ as a normalization of K, this can be written in the more compact form

K̄ = [K̄_1  K̄_2  K̄_3] = (1 − K_3 C_2 B_2)^{-1} [K_1  K_2  K_3]                 (4)

or K̄ = cK, where c is a scalar. The control law is then given by

u̇ = K̄ [C_2^T  A^T C_2^T  (A^2)^T C_2^T]^T x + K̄ [0  B_2^T C_2^T  B_2^T A^T C_2^T]^T u        (5)

Denoting K_x = K̄ [C_2^T  A^T C_2^T  (A^2)^T C_2^T]^T and K_u = K̄ [0  B_2^T C_2^T  B_2^T A^T C_2^T]^T,
the block diagram of the control law (5) is shown in Fig. 1. In the state feedback
representation, it can be seen that the PID controller has interesting features. It has state
feedback in the upper loop and pure integrator backstepping in the lower loop. By means of
the internal model principle (IMP) (Francis & Wonham, 1976; Joelianto & Williamson, 2009),
the integrator also guarantees that the PID controller will give zero tracking error for a step
reference signal. Equation (5) represents an output feedback law with constrained state
feedback. That is, the control signal (2) may be written as

u_a = K_a x_a                                                                 (6)

where

u_a = u̇,   x_a = [x ; u]

and

K_a = K̄ [ [C_2^T  A^T C_2^T  (A^2)^T C_2^T]^T   [0  B_2^T C_2^T  B_2^T A^T C_2^T]^T ]

Arranging the equation and eliminating the transposes leads to

K_a = K̄ [ C_2       0
          C_2 A     C_2 B_2
          C_2 A^2   C_2 A B_2 ] = K̄ Γ                                         (7)

The augmented system equation is obtained from (1) and (7) as follows:

ẋ_a = A_a x_a + B_a u_a                                                       (8)

where

A_a = [ A   B_2
        0   0   ],       B_a = [ 0
                                 1 ]

Fig. 1. Block diagram of the state space representation of the PID controller

Equations (6), (7) and (8) show that the PID controller can be viewed as a state variable
feedback law for the original system augmented with an integrator at its input. The
augmented formulation also shows the same structure as the integral backstepping method
(Krstic et al., 1995) with one pure integrator. Hence, the selection of the parameters of the
PID controller (6) via a full state feedback gain is inherently an integral backstepping control
problem. The problem of parameter selection for the PID controller becomes an optimal
control problem once a performance index of the augmented system (8) is defined. The
parameters of the PID controller are then obtained by solving equation (7), which requires
the inversion of the matrix Γ. Since Γ is, in general, not a square matrix, a numerical method
should be used to obtain the (pseudo-)inverse.
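A minimal numerical sketch of this step is given below (the function name and the use of NumPy are choices of this illustration, not of the chapter): it forms Γ for a SISO plant, recovers K̄ from a given augmented feedback gain K_a = K̄ Γ by means of the Moore-Penrose pseudo-inverse, and then undoes the normalization (4) to obtain K_1, K_2 and K_3.

```python
import numpy as np

def pid_gains_from_state_feedback(Ka, A, B2, C2):
    """Recover (K1, K2, K3) of the PID law (2) from an augmented state
    feedback gain Ka satisfying Ka = Kbar * Gamma, eq. (7).
    SISO case: A is n x n, B2 is n x 1, C2 is 1 x n, Ka is 1 x (n+1)."""
    # Gamma stacks y, y_dot and y_ddot expressed in (x, u), cf. eqs. (5)-(7).
    Gamma = np.vstack([
        np.hstack([C2,         np.zeros((1, 1))]),
        np.hstack([C2 @ A,     C2 @ B2]),
        np.hstack([C2 @ A @ A, C2 @ A @ B2]),
    ])
    # Gamma is 3 x (n+1) and generally not square, so the least-squares
    # solution of Ka = Kbar * Gamma is taken via the pseudo-inverse.
    Kbar = Ka @ np.linalg.pinv(Gamma)                 # [Kbar1 Kbar2 Kbar3]
    # Undo the normalization (4): [K1 K2 K3] = (1 - K3*C2*B2) * Kbar.
    # Since K3 = c*Kbar3 with c = 1 - K3*C2*B2, it follows that
    # c = 1 / (1 + Kbar3*C2*B2).
    c = 1.0 / (1.0 + Kbar[0, 2] * (C2 @ B2).item())
    K1, K2, K3 = (c * Kbar).ravel()
    return K1, K2, K3
```

The same fragment can be applied unchanged once K_a has been obtained from the H∞ dissipative design of Section 3.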



For the sake of simplicity, the problem has been set up in the single-input-single-output
(SISO) case. The extension of the method to the multi-inputs-multi-outputs (MIMO) case is
straightforward. For a MIMO PID controller, the control signal has dimension m, u ∈ R^m,
and is assumed to be the output of a PID controller whose input has dimension p, y ∈ R^p.
The parameters of the PID controller K_1, K_2 and K_3 will be square matrices of appropriate
dimension.

3. H∞ dissipative integral backstepping synthesis

The backstepping method developed by (Krstic et al., 1995) has received considerable
attention and has become a well-known method for control system design in the last decade.
The backstepping design is a recursive algorithm that steps back toward the control input by
means of integrations. In nonlinear control system design, backstepping can be used to force
a nonlinear system to behave like a linear system in a new set of coordinates, with the
flexibility to avoid cancellation of useful nonlinearities and to focus on the objectives of
stabilization and tracking. Here, the paper combines the advantages of the backstepping
method, dissipative control and H∞ optimal control for the parameter selection of the PID
controller, in order to develop a new robust PID controller design.

Consider the single input single output linear time invariant plant in the standard form used
for H∞ performance, given by the state space equations

ẋ(t) = A x(t) + B_1 w(t) + B_2 u(t),   x(0) = x_0
z(t) = C_1 x(t) + D_11 w(t) + D_12 u(t)                                       (9)
y(t) = C_2 x(t) + D_21 w(t) + D_22 u(t)

where x ∈ R^n denotes the state vector, u ∈ R^1 is the control input, w ∈ R^p is an external
input that represents driving signals generating reference signals, disturbances and
measurement noise, y ∈ R^1 is the plant output, and z ∈ R^m is a vector of output signals
related to the performance of the control system.

Definition 1.
A system is dissipative (Yuliar et al., 1998) with respect to the supply rate r(z(t), w(t)) for each
initial condition x_0 if there exists a storage function V, V : R^n → R^+, satisfying the inequality

V(x(t_0)) + ∫_{t_0}^{t_1} r(z(t), w(t)) dt ≥ V(x(t_1)),   ∀ t_1 ≥ t_0 ≥ 0, x_0 ∈ R^n            (10)

for all trajectories (x, y, z) satisfying (9).
The supply rate function r(z(t), w(t)) should be interpreted as the supply delivered to the
system. If the integral ∫_{t_0}^{t_1} r(z(t), w(t)) dt is positive on the interval [t_0, t_1], then work
has been done to the system; otherwise work is done by the system. The supply rate
determines not only the dissipativity of the system but also the required performance index
of the control system. The storage function V generalizes the notion of an energy function for
a dissipative system. The function characterizes the change of internal storage
V(x(t_1)) − V(x(t_0)) in any interval [t_0, t_1], and ensures that this change will never exceed the
amount of supply delivered to the system. The dissipative method provides a unifying tool,
as the performance indices of control systems can be expressed in a general supply rate by
selecting values of the supply rate parameters.
The general quadratic supply rate function (Hill & Moylan, 1977) is given by the following
equation

r(z, w) = (1/2)(w^T Q w + 2 w^T S z + z^T R z)                                (11)

where Q and R are symmetric matrices and

Q̂(x) = Q + S D_11(x) + D_11^T(x) S^T + D_11^T(x) R D_11(x)

such that Q̂(x) > 0 for x ∈ R^n, R < 0, and inf_x {σ_min(Q̂(x))} = k_0 > 0. This general supply
rate represents general problems in control system design by proper selection of the matrices
Q, R and S (Hill & Moylan, 1977; Yuliar et al., 1997): finite gain (H∞) performance (Q = γ²I,
S = 0 and R = −I); passivity (Q = R = 0 and S = I); and mixed H∞-positive real performance
(Q = θγ²I, R = −θI and S = (1 − θ)I for θ ∈ [0, 1]).
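For example, substituting the H∞ (finite gain) choice Q = γ²I, S = 0 and R = −I into (11), with D_11 = 0 so that Q̂(x) = γ²I > 0, gives

r(z, w) = (1/2)(γ² w^T w − z^T z)

which is exactly the supply rate adopted for H∞ performance in (13) below.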
For the PID control problem in the robust control framework, the plant (Σ) is given by the
state space equation

Σ :  ẋ(t) = A x(t) + B_1 w(t) + B_2 u(t),   x(0) = x_0
     z(t) = [C_1 x(t) ; D_12 u(t)]                                            (12)

with D_11 = 0 and γ > 0, together with the quadratic supply rate function for H∞ performance

r(z, w) = (1/2)(γ² w^T w − z^T z)                                             (13)

Next, the plant (Σ) is augmented with integral backstepping on the control input as follows:

ẋ(t) = A x(t) + B_1 w(t) + B_2 u(t)
u̇(t) = u_a(t)                                                                 (14)
z(t) = [C_1 x(t) ; D_12 u(t) ; ρ u_a(t)]

where ρ is the integral backstepping parameter, which acts on the derivative of the control
signal u(t). In equation (14), the parameter ρ > 0 is a tuning parameter of the PID controller in
the state space representation that determines the rate of the control signal. Note that the
standard PID controller in the state feedback representation of equations (6), (7) and (8)
corresponds to the integral backstepping parameter ρ = 1. Note also that, in this design, the
gains of the PID controller are replaced by two new design parameters, namely γ and ρ,
which correspond to the robustness of the closed loop system and the control effort,
respectively.
The state space representation of the plant with integrator backstepping in equation (14)
can then be written in the augmented form as follows



[ẋ(t) ; u̇(t)] = [A  B_2 ; 0  0] [x(t) ; u(t)] + [B_1 ; 0] w(t) + [0 ; 1] u_a(t)
                                                                              (15)
z(t) = [C_1  0 ; 0  D_12 ; 0  0] [x(t) ; u(t)] + [0 ; 0 ; ρ] u_a(t)

The compact form of the augmented plant (Σ_a) is given by

ẋ_a(t) = A_a x_a(t) + B_w w(t) + B_a u_a(t),   x_a(0) = x_a0
                                                                              (16)
z(t) = C_a x_a(t) + D_1a w(t) + D_2a u_a(t)

where

x_a = [x ; u],   A_a = [A  B_2 ; 0  0],   B_w = [B_1 ; 0],   B_a = [0 ; 1],
C_a = [C_1  0 ; 0  D_12 ; 0  0],   D_1a = 0,   D_2a = [0 ; 0 ; ρ]

Now consider the full state feedback gain of the form

u_a(t) = K_a x_a(t)                                                           (17)

The objective is then to find the feedback gain K_a which stabilizes the augmented plant (Σ_a)
with respect to the dissipative function V in (10) by a parameter selection of the quadratic
supply rate (11) for a particular control performance. Fig. 2 shows the system description of
the augmented system, consisting of the plant and the integral backstepping, with the state
feedback control law.



Fig. 2. System description of the augmented system (plant Σ_a with state feedback u_a = K_a x_a)



The following theorem gives the existence condition and the formula for the stabilizing
feedback gain K_a.

Theorem 2.
Given γ > 0 and ρ > 0. If there exists X = X^T > 0 satisfying the following Algebraic Riccati
Inequality

[A  B_2 ; 0  0]^T X + X [A  B_2 ; 0  0] − X ( ρ^{-2} [0 ; 1][0  1] − γ^{-2} [B_1 ; 0][B_1^T  0] ) X + [C_1^T C_1  0 ; 0  D_12^T D_12] < 0        (18)

then the full state feedback gain

K_a = −ρ^{-2} B_a^T X = −ρ^{-2} [0  1] X                                      (19)

leads to ||Σ||_∞ < γ, i.e. the H∞ norm of the closed loop transfer function from w to z is less
than γ.

Proof.
Consider the standard system (9) with the full state feedback gain

u(t) = K x(t)                                                                 (20)

and the closed loop system

ẋ(t) = A_u x(t) + B_1 w(t),   x(0) = x_0
z(t) = C_u x(t) + D_11 w(t)

where D_11 = 0, A_u = A + B_2 K and C_u = C_1 + D_12 K, and suppose it is strictly dissipative
with respect to the quadratic supply rate (11), so that the matrix A_u is asymptotically stable.
This implies that the related system

ẋ(t) = Ã x(t) + B̃_1 w(t),   x(0) = x_0
z(t) = C̃_1 x(t)

where Ã = A_u − B_1 Q̂^{-1} S̃ C_u, B̃_1 = B_1 Q̂^{-1/2} and C̃_1 = (S̃^T Q̂^{-1} S̃ − R)^{1/2} C_u, has H∞
norm strictly less than 1, which in turn implies that there exists a matrix X > 0 solving the
following Algebraic Riccati Inequality (ARI) (Petersen et al., 1991)

Ã^T X + X Ã + X B̃_1 B̃_1^T X + C̃_1^T C̃_1 < 0                                 (21)

In terms of the parameters of the original system, this can be written as

(A_u)^T X + X A_u + [X B_1 − (C_u)^T S̃^T] Q̂^{-1} [B_1^T X − S̃ C_u] − (C_u)^T R̃ C_u < 0

Define the full state feedback gain

K = −Ê^{-1} ((B_2 − B_1 Q̂^{-1} S̃ D_12)^T X + D_12^T R̃ C_1)                   (22)

By inserting

A_u = A + B_2 K,   C_u = C_1 + D_12 K,
S̃ = S + D_11^T R,   Q̂ = Q + S D_11 + D_11^T S^T + D_11^T R D_11,
R̃ = S̃^T Q̂^{-1} S̃ − R,   Ê = D_12^T R̃ D_12,
B̂ = B_2 − B_1 Q̂^{-1} S̃ D_12,   D̂ = I − D_12 Ê^{-1} D_12^T R̃

into (21), completing the squares and removing the gain K give the following ARI

X (A − B̂ Ê^{-1} D_12^T R̃ C_1 − B_1 Q̂^{-1} S̃ C_1) + (A − B̂ Ê^{-1} D_12^T R̃ C_1 − B_1 Q̂^{-1} S̃ C_1)^T X
− X (B̂ Ê^{-1} B̂^T − B_1 Q̂^{-1} B_1^T) X + C_1^T D̂^T R̃ D̂ C_1 < 0              (23)

Using the results of (Scherer, 1990), if there exists X > 0 satisfying (23), then K given by (22)
is stabilizing, so that the closed loop matrix A_u = A + B_2 K is asymptotically stable.
Now consider the augmented plant with integral backstepping in (16). In this case D_1a = 0,
and note that D_2a^T C_a = 0. The H∞ performance is obtained by setting the quadratic supply
rate (11) as follows:

S = 0,   R̃ = −R = I,   Ê = D_2a^T R̃ D_2a = D_2a^T D_2a,   B̂ = B_a,
D̂ = I − D_2a (D_2a^T D_2a)^{-1} D_2a^T

Inserting this setting into the ARI (23) yields

X (A_a − B_a (D_2a^T D_2a)^{-1} D_2a^T C_a − B_w Q̂^{-1} S̃ C_a)
+ (A_a − B_a (D_2a^T D_2a)^{-1} D_2a^T C_a − B_w Q̂^{-1} S̃ C_a)^T X
− X (B_a (D_2a^T D_2a)^{-1} B_a^T − B_w Q̂^{-1} B_w^T) X
+ C_a^T (I − D_2a (D_2a^T D_2a)^{-1} D_2a^T)^T (I − D_2a (D_2a^T D_2a)^{-1} D_2a^T) C_a < 0

which can be written in the compact form

X A_a + A_a^T X − X (ρ^{-2} B_a B_a^T − γ^{-2} B_w B_w^T) X + C_a^T C_a < 0    (24)

and this gives (18). The full state feedback gain is then found by inserting the setting into (22),

K_a = −Ê^{-1} ((B_a − B_w Q̂^{-1} S̃ D_2a)^T X − D_2a^T R̃ C_a) = −ρ^{-2} B_a^T X

which gives ||Σ||_∞ < γ (Yuliar et al., 1998; Scherer & Weiland, 1999). This completes the
proof.

The relation of the ARI solution (18) to the ARE solution is shown in the following. Let the
transfer function of the plant (9) be represented by

[z(s) ; y(s)] = [P_11(s)  P_12(s) ; P_21(s)  P_22(s)] [w(s) ; u(s)]

and assume the following conditions hold:
(A1) (A, B_2, C_2) is stabilizable and detectable;
(A2) D_22 = 0;
(A3) D_12 has full column rank and D_21 has full row rank;
(A4) P_12(s) and P_21(s) have no invariant zeros on the imaginary axis.

From (Gahinet & Apkarian, 1994), the equivalent Algebraic Riccati Equation (ARE), given by
the formula

X (A − B̂ Ê^{-1} D_12^T R̃ C_1 − B_1 Q̂^{-1} S̃ C_1) + (A − B̂ Ê^{-1} D_12^T R̃ C_1 − B_1 Q̂^{-1} S̃ C_1)^T X
− X (B̂ Ê^{-1} B̂^T − B_1 Q̂^{-1} B_1^T) X + C_1^T D̂^T R̃ D̂ C_1 = 0              (25)

has a (unique) solution X_∞ ≥ 0 such that

A + B_2 K + B_1 [γ^{-2} B_1^T X_∞ − S̃ (C_1 + D_12 K)]

is asymptotically stable. The characterization of feasible γ using the Algebraic Riccati
Inequality (ARI) in (24) and the ARE in (25) is then immediate, since the solutions of the ARE
(X_∞) and of the ARI (X) satisfy 0 ≤ X_∞ ≤ X, X = X^T > 0 (Gahinet & Apkarian, 1994).
By the Schur complement, the Algebraic Riccati Inequality (24) implies

[ A_a^T X + X A_a + C_a^T C_a    X B_a     X B_w
  B_a^T X                        −ρ² I     0
  B_w^T X                        0         −γ² I ]  < 0                       (26)
The problem is then to find X > 0 such that the LMI given in (26) holds. The LMI problem can
be solved using the method of (Gahinet & Apkarian, 1994), which implies the solution of the
ARI (18) (Liu & He, 2006). The parameters of the PID controller designed by using H∞
dissipative integral backstepping can then be found by the following algorithm:

1. Select ρ > 0.

2. Select γ > 0.

3. Find X > 0 by solving the LMI in (26).

4. Compute K_a using (19).

5. Compute K̄ using (7).

6. Compute K_1, K_2 and K_3 using (4).

7. Apply them in the PID controller (2).

8. If a minimum γ is required, repeat steps 2 and 3 until γ = γ_min, and then proceed with the
remaining steps.
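As a sketch of steps 1-4 (the function name, tolerances and the choice of the cvxpy modelling package are illustrative assumptions of this example; an SDP-capable backend such as SCS is assumed to be available behind cvxpy), the LMI (26) can be solved directly for X and the gain K_a of (19) read off:

```python
import numpy as np
import cvxpy as cp

def hinf_backstepping_gain(A, B1, B2, C1, D12, gamma, rho, eps=1e-6):
    """Steps 1-4: build the augmented plant (16), solve the LMI (26)
    for X = X' > 0 and return Ka from (19). SISO setting of Section 2."""
    n, m, p = A.shape[0], B2.shape[1], B1.shape[1]
    # Augmented matrices of (16): x_a = [x; u], integrator on the control input.
    Aa = np.block([[A, B2], [np.zeros((m, n + m))]])
    Bw = np.vstack([B1, np.zeros((m, p))])
    Ba = np.vstack([np.zeros((n, m)), np.eye(m)])
    Ca = np.block([[C1, np.zeros((C1.shape[0], m))],
                   [np.zeros((D12.shape[0], n)), D12]])
    X = cp.Variable((n + m, n + m), symmetric=True)
    M = cp.bmat([
        [Aa.T @ X + X @ Aa + Ca.T @ Ca, X @ Ba,              X @ Bw],
        [Ba.T @ X,                      -rho**2 * np.eye(m), np.zeros((m, p))],
        [Bw.T @ X,                      np.zeros((p, m)),    -gamma**2 * np.eye(p)],
    ])
    N = (n + m) + m + p
    constraints = [X >> eps * np.eye(n + m),
                   0.5 * (M + M.T) << -eps * np.eye(N)]  # M is symmetric by construction
    cp.Problem(cp.Minimize(0), constraints).solve()
    if X.value is None:
        raise RuntimeError("LMI (26) infeasible for this (gamma, rho)")
    Ka = -(1.0 / rho**2) * Ba.T @ X.value                 # eq. (19)
    return Ka, X.value
```

Steps 5-7 then reduce to the pseudo-inverse mapping sketched after equation (7) in Section 2, and step 8 can be implemented by decreasing γ (for example by bisection) until the LMI becomes infeasible.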

4. Delay time uncertainties compensation 

Consider a plant given by a first order system with delay time, which is a common
assumption in industrial process control, and further assume that the delay time uncertainty
belongs to an a priori known interval:

Y(s) = (1/(τs + 1)) e^{−Ls} U(s),   L ∈ [a, b]                                (27)

The example is taken from (Joelianto et al., 2008) and represents a problem in industrial
process control due to the implementation of industrial digital data communication via
ethernet networks with fieldbus topology from the controller to the sensor and the actuator
(Hops et al., 2004; Jones, 2006; Joelianto & Hosana, 2009). In order to write the plant in the
state space representation, the delay time is approximated by using the first order Pade
approximation

Y(s) = (1/(τs + 1)) · ((−ds + 1)/(ds + 1)) U(s),   d = L/2                    (28)



In this case, the values of τ and d are assumed as follows: τ = 1 s and d_nom = 3 s. These pose
a difficult problem, since the ratio between the delay time and the time constant is greater
than one (d/τ > 1). The delay time uncertainties are assumed to lie in the interval d ∈ [2, 4].
The delay time uncertainty is separated from its nominal value by using a linear fractional
transformation (LFT), which yields a feedback connection between the nominal block and the
uncertainty block.







Fig. 3. Separation of nominal and uncertainty using LFT
The delay time uncertainty can then be represented as

d = α d_nom + β δ,   −1 ≤ δ ≤ 1

i.e. d can be written as an upper linear fractional transformation F_u(·, δ) of the normalized
uncertainty δ. After simplification, the delay time uncertainty of any known range has a
linear fractional transformation (LFT) representation as shown in the following figure.
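For the interval considered here, one consistent choice of the parameters of this representation is α = 1 and β = 1: with d_nom = 3,

d = α d_nom + β δ = 3 + δ

sweeps the whole interval [2, 4] as δ ranges over [−1, 1].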







Fig. 4. First order system with delay time uncertainty



The representation of the plant augmented with the uncertainty is

G_tot(s) = [ A    B_1   B_2
             C_1  D_11  D_12
             C_2  D_21  D_22 ]                                                (29)



The corresponding matrices in (29) are obtained by combining the nominal Pade-approximated
plant with the uncertainty channel, and their block entries follow directly from the LFT
construction above.











In order to incorporate the integral backstepping design, the plant is then augmented with an
integrator at its input, as in (16):

A_a = [A  B_2 ; 0  0],   B_w = [B_1 ; 0],   B_a = [0 ; 1],
C_a = [C_1  0 ; 0  D_12 ; 0  0],   D_2a = [0 ; 0 ; ρ]
The problem is then to find the solution X > 0 and γ > 0 of the ARI (18) and to compute the
full state feedback gain given by

u_a(t) = K_a x_a(t) = −ρ^{-2} ([0  1] X) [x(t) ; u(t)]

which is stabilizing and leads to the infinity norm ||Σ||_∞ < γ.
The state space representation for the nominal system is given by

A_nom = [ −1.6667  −0.6667
           1         0      ],       C_nom = [ −1   0.6667 ]



In this representation, the performance of the closed loop system will be guaranteed for the
specified delay time range with fast transient response. The full state feedback gain of the
PID controller is then mapped back to the PID parameters through (4) and (7), using
C_2 = C_nom = [−1  0.6667].

For different γ, the PID parameters and transient performance measures, namely the settling
time (T_s) and rise time (T_r), are calculated by using the LMI (26) and presented in Table 1.
For different ρ but fixed γ, the parameters are shown in Table 2. For comparison, the PID
parameters are also computed by using the standard H∞ performance obtained by solving
the ARE in (25). The results are shown in Table 3 and Table 4 for different γ and different ρ,
respectively. It can be seen from Tables 1 and 2 that there is no clear pattern in either the
settling time or the rise time. Only Table 1 shows that decreasing γ decreases the value of the
three parameters. On the other hand, the calculation using the ARE shows that the settling
time and the rise time are decreased by reducing γ or ρ. Table 3 shows the same trend as
Table 1 when the value of γ is decreased.



γ       ρ      K_p      K_i      K_d      T_r (s)   T_s 5% (s)
0.1     1      0.2111   0.1768   0.0695   10.8      12.7
0.248   1      0.3023   0.2226   0.1102   8.63      13.2
0.997   1      0.7744   0.3136   0.2944   4.44      18.8
1.27    1      10.471   0.5434   0.4090   2.59      9.27
1.7     1      13.132   0.746    0.5191   1.93      13.1

Table 1. Parameters and transient response of PID for different γ with LMI



γ       ρ      K_p      K_i      K_d      T_r (s)   T_s 5% (s)
0.997   0.66   11.019   0.1064   0.3127   39.8      122
0.997   0.77   0.9469   0.2407   0.3113   13.5      39.7
0.997   1      0.7744   0.3136   0.2944   4.44      18.8
0.997   1.24   0.4855   0.1369   0.1886   21.6      56.8
0.997   1.5    0.2923   0.0350   0.1151   94.4      250

Table 2. Parameters and transient response of PID for different ρ with LMI



γ       ρ      K_p      K_i      K_d      T_r (s)   T_s 5% (s)
0.1     1      0.2317   0.055    0.1228   55.1      143
0.248   1      0.2319   0.0551   0.123    55.0      141
0.997   1      0.2373   0.0566   0.126    53.8      138
1.27    1      0.2411   0.0577   0.128    52.6      135
1.7     1      0.2495   0.0601   0.1327   52.2      130

Table 3. Parameters and transient response of PID for different γ with ARE



γ       ρ      K_p      K_i      K_d      T_r (s)   T_s 5% (s)
0.997   0.66   0.5322   0.1396   0.2879   21.9      57.6
0.997   0.77   0.4024   0.1023   0.2164   29.7      77.5
0.997   1      0.2373   0.0566   0.126    39.1      138
0.997   1.24   0.1480   0.0332   0.0777   91.0      234
0.997   1.5    0.0959   0.0202   0.0498   150.0     383

Table 4. Parameters and transient response of PID for different ρ with ARE




Fig. 5. Transient response for different γ using LMI



Robust hL PID Controller Design Via LMI Solution of 
Dissipative Integral Backstepping with State Feedback Synthesis 



83 




Fig. 6. Transient response for different ρ using LMI




Fig. 7. Nyquist plot for γ = 0.248 and ρ = 1 using LMI



Fig. 8. Nyquist plot for γ = 0.997 and ρ = 0.66 using LMI




Fig. 9. Transient response for different d using LMI



Fig. 10. Transient response for larger values of d using LMI

The simulation results are shown in Figures 5 and 6 for the LMI method, where γ and ρ are
denoted by g and r, respectively, in the figures. The LMI method leads to a faster transient
response compared to the ARE method for all values of γ and ρ. The Nyquist plots in Figures
7 and 8 show that the LMI method has a small gain margin. In general, the same holds for the
phase margin, except at γ = 0.997 and ρ = 1.5, where the LMI design has a bigger phase
margin.

In order to test the robustness to the specified delay time uncertainties, the obtained robust
PID controller with parameters γ = 0.1 and ρ = 1 is tested by perturbing the delay time over
the range d ∈ [1, 4]. The results using the LMI method are shown in Figures 9 and 10,
respectively. The LMI method yields faster transient responses, although it tends to oscillate
at larger time delays. With the same parameters γ and ρ, the PID controller is then subjected
to delay times larger than the design specification. The LMI method can handle a delay time
to time constant ratio of L/τ < 12, while the ARE method can handle a larger ratio of L/τ < 43.
In summary, the simulation results showed that the LMI method produced a fast transient
response of the closed loop system with no overshoot and the capability to handle
uncertainties. If the range of the uncertainties is known, the stability and the performance of
the closed loop system will be guaranteed.



5. Conclusion 

The paper has presented a model based method to select the optimum setting of the PID
controller using a robust H∞ dissipative integral backstepping method with state feedback
synthesis. The state feedback gain is found by using the LMI solution of an Algebraic Riccati
Inequality (ARI). The paper also derives the synthesis of the state feedback gain of the robust
H∞ dissipative integral backstepping method. The parameters of the PID controller are
calculated by using two new parameters which correspond to the infinity norm and the
weighting of the control signal of the closed loop system.

The LMI method will guarantee the stability and the performance of the closed loop system
if the range of the uncertainties is included in the LFT representation of the model. The LFT
representation in the design can also be extended to include plant uncertainties,
multiplicative perturbations, pole clustering, etc. In that case, the problem becomes a
multi-objective LMI based robust H∞ PID controller problem. The proposed approach can be
directly extended to the MIMO control problem with a MIMO PID controller.

6. References 

Astrom, K.J. & Hagglund, T. (1995). PID Controllers: Theory, Design, and Tuning, second ed., 

Instrument Society of America, ISBN 1-55617-516-7, Research Triangle Park, North 

Carolina - USA 
Boyd, S.P. & Barrat, C.H. (1991). Linear Controller Design: Limits of Performance, Prentice Hall 

Inc., ISBN 0-1353-8687-X, New Jersey 
Boyd, S.; El Ghaoui, L., Feron, E. & Balakrishnan, V. (1994). Linear Matrix Inequalities in 

System and Control Theory, SIAM Studies 15, ISBN 0-89871-485-0, Philadelphia 
Balakrishnan, V. & Wang, F. (2000). Semidefinite programming in systems and control, In: 

Handbook on Semidefinite Programming, Wolkowics, H; Saigal, R. & Vandenberghe, L. 

(Ed.), pp. 421-441, Kluwer Academic Pub., ISBN 0-7923-7771-0, Boston 
Francis, B.A. & Wonham, W.M. (1976). The internal model principle of control theory, 

Automatica, Vol. 12, pp. 457-465, ISSN 0005-1098 
Fung, H.W.; Wang, Q.G. & Lee, T.H. (1998). PI tuning in terms of gain and phase margins, 

Automatica, Vol. 34, pp. 1145-1149, ISSN 0005-1098 
Gahinet, P. & Apkarian, P. (1994). A linear matrix inequality approach to H∞ control, Inter.

Journal of Robust Nonlinear Control, Vol. 4, pp. 421-448, ISSN 1099-1239 
Ge, M.; Chiu, M.S. & Wang, Q.G. (2002). Robust PID controller design via LMI approach, 

Journal of Process Control, Vol. 12, pp. 3-13, ISSN 0959-1524 
Green, M. & Limebeer, D.J. (1995). Linear Robust Control, Englewood Cliffs, Prentice Hall 

Inc., ISBN 0-13-102278-4, New Jersey 
Goncalves, E.N.; Palhares, R.M. & Takahashi, R.H.C. (2008). A novel approach for H2/H∞

robust PID synthesis for uncertain systems, Journal of Process Control, Vol. 18, pp. 

19-26, ISSN 0959-1524 
Hara, S.; Iwasaki, T. & Shiokata, D. (2006). Robust PID control using generalized KYP 

synthesis, IEEE Control Systems Magazine, Feb., pp. 80-91, ISSN 0272-1708 
Hill, D.J. & Moylan, P.J. (1977). Stability results for nonlinear feedback systems, Automatica, 

Vol. 13, pp. 377-382, ISSN 0005-1098 
Hops, J.; Swing, B., Phelps, B., Sudweeks, B., Pane, J. & Kinslow, J. (2004). Non-deterministic 

DUT behavior during functional testing of high speed serial busses: challenges and 

solutions, Proceedings International Test Conference, ISBN 0-7803-8581-0, 26-28 Oct. 

2004, IEEE, Charlotte, NC, USA 
Hwang, S.H. & Chang, H.C. (1987). A theoretical examination of closed-loop properties and

tuning methods of single-loop PI controllers, Chemical Engineering Science, Vol. 42, 

pp. 2395-2415, ISSN 0009-2509 




Joelianto, E. & Tommy. (2003). A robust DC to DC buck-boost converter using PID H∞-

backstepping controller, Proceedings of Int. Conf. on Power Electronics and Drive

Systems (PEDS), pp. 591-594, ISBN 0-7803-7885-7, 17-20 Nov. 2003, Singapore 
Joelianto, E.; Sutarto, H.Y. & Wicaksono, A. (2008). Compensation of delay time

uncertainties in industrial control ethernet networks using LMI based robust H∞ PID

controller, Proceedings of 5th Int. Conf. Wireless Optical Comm. Networks (WOCN),

pp. 1-5, ISBN 978-1-4244-1979-1, 5-7 May 2008, Surabaya - Indonesia 
Joelianto, E. & Hosana (2009). Loop-back action latency performance of an industrial data 

communication protocol on a PLC ethernet network, Internetworking Indonesia 

journal, Vol. I, No.l, pp. 11-18, ISSN 1942-9703 
Joelianto, E. & Williamson, D. (2009). Transient response improvement of feedback control 

systems using hybrid reference control, International Journal of Control, Vol. 81, No. 

10, pp. 1955-1970, ISSN 0020-7179 print/ISSN 1366-5820 online 
Jones, M. (2006). Designing for real time embedded ethernet, The Industrial Ethernet Book,

Vol. 35, pp. 38-41 
Krstic, M.; Kanellakopoulos, I. & Kokotovic, P. (1995). Nonlinear and Adaptive Control Design, 

John Wiley and Sons Inc., ISBN 0-4711-2732-9, USA 
Liu, K.Z. & He, R. (2006). A simple derivation of ARE solutions to the standard H∞ control

problem based on LMI solution, Systems & Control Letters, Vol. 5, pp. 487-493, ISSN

0167-6911 
Petersen, I.R.; Anderson, B.D.O. & Jonkheere, E. (1991). A first principles solution to the
nonsingular H∞ control problem, Inter. Journal of Robust and Nonlinear Control, Vol. 1,

pp. 171-185, ISSN 1099-1239 
Petersen, I.R.; Ugrinovskii, V.A. & Savkin, A.V. (2000). Robust Control Design using H∞

Methods, Springer, ISBN 1-8523-3171-2, London 
Scherer, C. (1990). The Riccati Inequality and State-Space H∞-Optimal Control, PhD Dissertation,

Bayerischen Julius Maximilans, Universitat Wurzburg, Wurzburg 
Scherer, C.; Gahinet, P. & Chilali, M. (1997). Multiobjective output-feedback control via

LMI optimization, IEEE Trans. on Automatic Control, Vol. 42, pp. 896-911, ISSN

0018-9286 
Scherer, C. & Weiland, S. (1999). Linear Matrix Inequalities in Control, Lecture Notes DISC 

Course, version 2.0. http://www.er.ele.tue.nl/SWeiland/lmi99.htm 
Shinskey, F.G. (1996). Process Control Systems: Application, Design and Tuning, fourth ed., 

McGraw-Hill, ISBN 0-0705-7101-5, Boston 
Wang, Q.G.; Lin, C., Ye, Z., Wen, G., He, Y. & Hang, C.C. (2007). A quasi-LMI approach to

computing stabilizing parameter ranges of multi-loop PID controllers, Journal of 

Process Control, Vol. 17, pp. 59-72, ISSN 0959-1524 
Williamson, D. & Moore, J.B. (1971). Three term controller parameter selection using 

suboptimal regulator theory, IEEE Trans. on Automatic Control, Vol. 16, pp. 82-83,

ISSN 0018-9286 
Wong, S.K.P. & Seborg, D.E. (1988). Control strategy for single-input-single-output 

nonlinear systems with time delays, International Journal of Control, Vol. 48, pp. 

2303-2327, ISSN 0020-7179 print/ISSN 1366-5820 online 
Yuliar, S.; James, M.R. & Helton, J.W. (1998). Dissipative control systems synthesis with full 

state feedback, Mathematics of Control, Signals, and Systems, Vol. 11, pp. 335-356, 

ISSN 0932-4194 print/ISSN 1435-568X online 




Yuliar, S.; Samyudia, Y. & Kadiman, K. (1997). General linear quadratic dissipative output 
feedback control system synthesis, Proceedings of 2nd Asian Control Conference
(ASCC), pp. 659-662, ISBN 89-950038-0-4-94550, Seoul-Korea, ACPA 

Zhou, K. & Doyle, J.C. (1998). Essential of Robust Control, Prentice Hall Inc., ISBN 0-13- 
790874-1, USA 

Ziegler, J.G. & Nichols, N.B. (1942). Optimum setting for automatic controllers, Transactions 
of The ASME, Vol. 64, pp. 759-769 



Robust H∞ Tracking Control of
Stochastic Innate Immune System Under Noises

Bor-Sen Chen, Chia-Hung Chang and Yung-Jen Chuang 

National Tsing Hua University 
Taiwan 



1. Introduction 

The innate immune system provides a tactical response, signaling the presence of 'non-self'
organisms and activating B cells to produce antibodies that bind to the intruders' epitopic
sites. The antibodies identify targets for scavenging cells that engulf and consume the 
microbes, reducing them to non-functioning units (Stengel et al., 2002b). The antibodies also 
stimulate the production of cytokines, complement factors and acute-phase response 
proteins that either damage an intruder's plasma membrane directly or trigger the second 
phase of immune response. The innate immune system protects against many extracellular 
bacteria or free viruses found in blood plasma, lymph, tissue fluid, or interstitial space 
between cells, but it cannot clean out microbes that burrow into cells, such as viruses, 
intracellular bacteria, and protozoa (Janeway, 2005; Lydyard et al., 2000; Stengel et al., 
2002b). The innate immune system is a complex system, and the relationships between the
immune system and the environment in which several modulatory stimuli are embedded
(e.g. antigens, molecules of various origin, physical stimuli, stress stimuli) are obscure. This
environment is noisy because of the great number of such signals. The immune noise has
therefore at least two components: (a) the internal noise, due to the exchange of a network of 
molecular and cellular signals belonging to the immune system during an immune response 
or in the homeostasis of the immune system. The concept of the internal noise might be 
viewed in biological terms as a status of sub-inflammation required by the immune 
response to occur; (b) the external noise, the set of external signals that target the immune 
system (and hence that add noise to the internal one) during the whole life of an organism. 
For clinical treatment of infection, several available methods focus on killing the invading 
microbes, neutralizing their response, and providing palliative or healing care to other 
organs of the body. Few biological or chemical agents have just one single effect; for 
example, an agent that kills a virus may also damage healthy 'self' cells. A critical function
of drug discovery and development is to identify new compounds that have maximum 
intended efficacy with minimal side effects on the general population. These examples 
include antibiotics as microbe killers; interferons as microbe neutralizers; interleukins, 
antigens from killed (i.e. non-toxic) pathogens, and pre-formed and monoclonal antibodies 
as immunity enhancers (each of very different nature); and anti-inflammatory and anti- 
histamine compounds as palliative drugs (Stengel et al., 2002b). 

Recently, several models of immune response to infection (Asachenkov, 1994; Nowak & 
May, 2000; Perelson & Weisbuch, 1997; Rundell et al., 1995) with emphasis on the human
immunodeficiency virus have been reported (Nowak et al., 1995; Perelson et al., 1993;
Perelson et al., 1996; Stafford et al., 2000). Norbert Wiener (Wiener, 1948) and Richard
Bellman (Bellman, 1983) appreciated and anticipated the application of mathematical
analysis to treatment in a broad sense, and Swan made surveys of early optimal control
applications to biomedical problems (Swan, 1981). Kirschner (Kirschner et al., 1997) offers an
optimal control approach to HIV treatment, and intuitive control approaches are presented
in (Bonhoeffer et al., 1997; De Boer & Boucher, 1996; Wein et al., 1998; Wodarz & Nowak,
1999, 2000).

The dynamics of drug response (pharmacokinetics) are modeled in several works (Robinson,
1986; van Rossum et al., 1986), and control theory is applied to drug delivery in other studies
(Bell & Katusiime, 1980; Carson et al., 1985; Chizeck & Katona, 1985; Gentilini et al., 2001;
Jelliffe, 1986; Kwong et al., 1995; Parker et al., 1996; Polycarpou & Conway, 1995;
Schumitzky, 1986). Recently, Stengel (Stengel et al., 2002a) presented a simple model for the
response of the innate immune system to infection and therapy, reviewed the prior methods
and results of optimization, and introduced a significant extension to the optimal control of
enhancing the immune response by solving a two-point boundary-value problem via an
iterative method. Their results show not only the progression from an initially
life-threatening state to a controlled or cured condition but also the optimal history of
therapeutic agents that produces that condition. In their study, the therapeutic method is
extended by adding linear-optimal feedback control to the nominal optimal solution.
However, the performance of quadratic optimal control for immune systems may be
degraded by the continuous exogenous pathogen input, which is considered as an
environmental disturbance of the immune system. Further, some overshoots may occur in
the optimal control process and may lead to organ failure, because the quadratic optimal
control only minimizes a quadratic cost function that is merely the integration of squares of
the states and thus allows the existence of overshoot (Zhou et al., 1996).

Recently, a minimax control scheme for the innate immune system was proposed via the
dynamic game theory approach to treat the robust control problem with unknown
disturbance and initial condition (Chen et al., 2008). The unknown disturbance and initial
condition are considered as a player who wants to destroy the immune system, and the
control scheme as another player who protects the innate immune system against the
disturbance and uncertain initial condition. However, that work assumes that all state
variables are available, which is not the case in practical applications.

In this study, a robust H∞ tracking control of the immune response is proposed for
therapeutic enhancement, i.e. to track a desired immune response under stochastic exogenous
pathogen input, environmental disturbances and uncertain initial states. Furthermore, the
state variables may not all be available, and the measurements are corrupted by noise.
Therefore, a state observer is employed for state estimation before state feedback control of
the stochastic immune system. Since the statistics of these stochastic factors may be unknown
or unavailable, the H∞ observer-based control methodology is employed for the robust H∞
tracking design of stochastic immune systems. In order to attenuate the effects of the
stochastic factors on the tracking error, their effects should be considered in the stochastic H∞
tracking control procedure from the robust design perspective. The effect of all possible
stochastic factors on the tracking error with respect to a desired immune response, which is
generated by a desired model, should be kept below a prescribed level for the enhanced
immune system, i.e. the proposed robust H∞ tracking control needs to be designed from the
stochastic H∞ tracking perspective. Since the stochastic innate immune system is highly
nonlinear, it is not easy to solve the robust observer-based tracking control problem by the
stochastic nonlinear H∞ tracking method directly.

Recently, fuzzy systems have been employed to approximate nonlinear dynamic systems in
order to efficiently treat the nonlinear control problem (Chen et al., 1999, 2000; Li et al., 2004;
Lian et al., 2001). A fuzzy model is proposed to interpolate several linearized stochastic
immune systems at different operating points to approximate the nonlinear stochastic innate
immune system via smooth fuzzy membership functions. Then, with the help of the fuzzy
approximation method, a fuzzy H∞ tracking scheme is developed so that the H∞ tracking
control of stochastic nonlinear immune systems can be easily solved by interpolating a set of
linear H∞ tracking systems, which can be solved by a constrained optimization scheme via
the linear matrix inequality (LMI) technique (Boyd, 1994) with the help of the Robust Control
Toolbox in Matlab (Balas et al., 2007). Since the fuzzy dynamic model can approximate any
nonlinear stochastic dynamic system, the proposed H∞ tracking method via fuzzy
approximation can be applied to the robust control design of any model of a nonlinear
stochastic immune system that can be T-S fuzzy interpolated. Finally, a computational
simulation example is given to illustrate the design procedure and to confirm the efficiency
and efficacy of the proposed H∞ tracking control method for stochastic immune systems
under external disturbances and measurement noises.

2. Model of innate immune response 

A simple model of four nonlinear ordinary differential equations for the dynamics of
infectious disease is introduced here to describe the rates of change of the pathogen, immune
cell and antibody concentrations, together with an indicator of organ health (Asachenkov,
1994; Stengel et al., 2002a). In general, the innate immune system is corrupted by
environmental noises. Further, some state variables cannot be measured directly, and the
state measurements may be corrupted by measurement noises. A more general dynamic
model will be given in the sequel.

ẋ_1 = (a_11 − a_12 x_3) x_1 + b_1 u_1 + w_1
ẋ_2 = a_21(x_4) a_22 x_1 x_3 − a_23 (x_2 − x_2*) + b_2 u_2 + w_2
ẋ_3 = a_31 x_2 − (a_32 + a_33 x_1) x_3 + b_3 u_3 + w_3
ẋ_4 = a_41 x_1 − a_42 x_4 + b_4 u_4 + w_4                                     (1)
y_1 = c_1 x_2 + n_1,   y_2 = c_2 x_3 + n_2,   y_3 = c_3 x_4 + n_3

a_21(x_4) = { cos(π x_4),   0 ≤ x_4 < 0.5
            { 0,            0.5 ≤ x_4

where x₁ denotes the concentration of a pathogen that expresses a specific foreign antigen; x₂
denotes the concentration of immune cells that are specific to the foreign antigen; x₃ denotes
the concentration of antibodies that bind to the foreign antigen; and x₄ denotes the
characteristic of a damaged organ [x₄ = 0: healthy, x₄ ≥ 1: dead]. The combined therapeutic
control agents and the exogenous inputs are described as follows: u₁ denotes the pathogen
killer's agent; u₂ denotes the immune cell enhancer; u₃ denotes the antibody enhancer; u₄
denotes the organ healing factor (or health enhancer); w₁ denotes the rate of continuing
introduction of exogenous pathogens; w₂ ~ w₄ denote the environmental disturbances or
unmodeled errors and residues; w₁ ~ w₄ are zero mean white noises, whose covariances are uncertain or






unavailable; and a₂₁(x₄) is a nonlinear function that describes the mediation of immune cell
generation by the damaged organ. If there is no antigen, the immune cell concentration
maintains the steady equilibrium value x₂*. The parameters have been chosen to produce a
system that recovers naturally from pathogen infections (without treatment) as a function of
the initial conditions during a period of time. Here, y₁, y₂, y₃ are the measurements of the
corresponding states; c₁, c₂, c₃ are the measurement scales; and n₁, n₂, n₃ are the measurement
noises. In this study, we assume the measurement of the pathogen x₁ is unavailable. For the
benchmark example in (1), both parameters and time units are abstractions, as no specific
disease is addressed. The state and control are always positive because concentrations cannot
go below zero, and organ death is indicated when x₄ ≥ 1. The structural relationship of the
system variables in (1) is illustrated in Fig. 1. Organ health mediates immune cell production,
inferring a relationship between immune response and fitness of the individual. Antibodies
bind to the attacking antigens, thereby killing pathogenic microbes directly, activating
complement proteins, or triggering an attack by phagocytic cells, e.g. macrophages and
neutrophils. Each element of the state is subject to an independent control, and new microbes
may continue to enter the system. In reality, however, the concentration of invaded pathogens
is hard to measure. We assume that only the remaining three state variables can be measured,
with measurement noises, by medical devices or other biological techniques such as an
immunofluorescence microscope, a technique based on the ability of antibodies to recognize and
bind to specific molecules. It is then possible to detect the number of molecules easily by
using a fluorescence microscope (Piston, 1999).
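A minimal simulation sketch of the deterministic part of model (1) is given below, assuming the benchmark coefficient values listed later in Table 1 and the control signs used in the example of Section 5; the controls and noises are set to zero here, so it only reproduces the uncontrolled response, and all numerical choices are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def a21(x4):
    # Mediation of immune-cell production by organ damage, eq. (1).
    return np.cos(np.pi * x4) if 0.0 <= x4 < 0.5 else 0.0

def immune_rhs(t, x, u=(0.0, 0.0, 0.0, 0.0), w=(0.0, 0.0, 0.0, 0.0)):
    # Benchmark coefficients (a11=a12=1, a22=3, a23=1, x2*=2, a31=1, a32=1.5,
    # a33=0.5, a41=0.5, a42=1) are hard-coded; u and w are therapy and noise inputs.
    x1, x2, x3, x4 = x
    u1, u2, u3, u4 = u
    w1, w2, w3, w4 = w
    dx1 = (1.0 - x3) * x1 - u1 + w1                       # pathogen
    dx2 = 3.0 * a21(x4) * x1 * x3 - (x2 - 2.0) + u2 + w2  # immune cells
    dx3 = x2 - (1.5 + 0.5 * x1) * x3 + u3 + w3            # antibodies
    dx4 = 0.5 * x1 - x4 - u4 + w4                         # organ damage
    return [dx1, dx2, dx3, dx4]

# Uncontrolled response from the example's initial condition x(0) = [3.5, 2, 4/3, 0].
sol = solve_ivp(immune_rhs, (0.0, 10.0), [3.5, 2.0, 4.0 / 3.0, 0.0], max_step=0.01)
print("final organ-damage index x4 =", sol.y[3, -1])
```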




Fig. 1. Innate and enhanced immune response to a pathogenic attack under exogenous
pathogens w₁, environmental disturbances w(t), and measurement noises n(t).






Several typical uncontrolled responses to increasing levels of initial pathogen concentration
under sub-clinical, clinical, chronic, and lethal conditions have been discussed and shown in
Fig. 2 (Stengel et al., 2002a). In general, the sub-clinical response would not require medical
examination, while the clinical case warrants medical consultation but is self-healing without
intervention. The pathogen concentration stabilizes at non-zero values in the chronic case,
which is characterized by permanently degraded organ health, while in the lethal case the
pathogen concentration diverges without treatment and kills the organ (Stengel et al., 2002b).
Finally, a more general disease dynamic model for the immune response can be represented as



$$
\begin{aligned}
\dot{x}(t) &= f(x(t)) + g(x(t))u(t) + Dw(t), \qquad x(0) = x_0\\
y(t) &= c(x(t)) + n(t)
\end{aligned} \qquad (2)
$$



where x(t) ∈ R^{n×1} is the state vector; u(t) ∈ R^{m×1} is the vector of control agents;
w(t) ∈ R^{n×1} includes exogenous pathogens, environmental disturbances and model uncertainty;
y(t) ∈ R^{l×1} is the measurement output; and n(t) ∈ R^{l×1} is the measurement noise. We
assume that w(t) and n(t) are independent stochastic noises, whose covariances may be uncertain
or unavailable. All possible nonlinear interactions in the immune system are represented by
f(x(t)).



Fig. 2. Native immune responses to attack by different levels of pathogens under sub-clinical,
clinical, chronic, and lethal conditions (Stengel et al., 2002a).






3. Robust H∞ therapeutic control of stochastic innate immune response

Our control design purpose for the nonlinear stochastic innate immune system in (2) is to
specify a state feedback control u(t) = k(x(t) − x_d(t)) so that the immune system can track a
desired response x_d(t). Since the state variables are not all available, they have to be
estimated for the feedback tracking control u(t) = k(x̂(t) − x_d(t)). Suppose the following
observer-based control with y(t) as input and u(t) as output is proposed for the robust H∞
tracking control.



$$
\begin{aligned}
\dot{\hat{x}}(t) &= f(\hat{x}(t)) + g(\hat{x}(t))u(t) + l(\hat{x}(t))\big(y(t) - c(\hat{x}(t))\big)\\
u(t) &= k(\hat{x}(t) - x_d(t))
\end{aligned} \qquad (3)
$$



where the observer gain l(x̂(t)) is to be specified so that the estimation error
e(t) = x(t) − x̂(t) is as small as possible, and the control gain k(x̂(t) − x_d(t)) is to be
specified so that the system state x(t) comes close to the desired state response x_d(t) from
the stochastic point of view.
Consider a reference model of the immune system with a desired time response described as
Consider a reference model of immune system with a desired time response described as 



$$\dot{x}_d(t) = A_d x_d(t) + r(t) \qquad (4)$$



where x_d(t) ∈ R^{n×1} is the reference state vector, A_d ∈ R^{n×n} is a specified
asymptotically stable matrix and r(t) is a desired reference signal. It is assumed that x_d(t),
∀t ≥ 0, represents a desired immune response for the nonlinear stochastic immune system in (2)
to follow, i.e. the therapeutic control is to specify the observer-based control in (3) such
that the tracking error x̃(t) = x(t) − x_d(t) is as small as possible under the influence of the
uncertain exogenous pathogens and environmental disturbances w(t) and the measurement noises
n(t). Since the measurement noises n(t), the exogenous pathogens and environmental disturbances
w(t) are uncertain and the reference signal r(t) may be arbitrarily assigned, the robust H∞
tracking control in (3) should be specified so that the stochastic effect of the three
uncertainties w(t), n(t) and r(t) on the tracking error is below a prescribed value ρ², i.e.
both the stochastic H∞ reference tracking and the H∞ state estimation should be achieved
simultaneously under uncertain w(t), n(t) and r(t):



$$
\frac{E\left[\int_0^{t_f}\big(\tilde{x}^{T}(t)Q_1\tilde{x}(t) + e^{T}(t)Q_2 e(t)\big)\,dt\right]}
{E\left[\int_0^{t_f}\big(w^{T}(t)w(t) + n^{T}(t)n(t) + r^{T}(t)r(t)\big)\,dt\right]} \le \rho^{2} \qquad (5)
$$

where the weighting matrices Q_i are assumed to be diagonal,

$$Q_i = \mathrm{diag}\,(q_{11}^{i},\; q_{22}^{i},\; q_{33}^{i},\; q_{44}^{i}), \qquad i = 1,2.$$



The diagonal element q_jj^i of Q_i denotes the weighting (punishment) on the corresponding
tracking error and estimation error component. Since the stochastic effect of w(t), r(t) and
n(t) on the tracking error x̃(t)






and the estimation error e(t) is prescribed to be below a desired attenuation level ρ² from the
energy point of view, the formulation in (5) is suitable for the robust H∞ stochastic tracking
problem under environmental disturbances w(t), measurement noises n(t) and a changeable
reference r(t), which are always met in practical design cases.
Remark 1: 

If the environmental disturbances w(t) and measurement noises n(t) are deterministic signals,
the expectation operator E[·] in (5) can be omitted.

Let us denote the augmented state vector $\bar{x}(t) = \begin{bmatrix} e(t)\\ x(t)\\ x_d(t)\end{bmatrix}$; then the dynamic equation of the augmented stochastic system is

$$
\dot{\bar{x}}(t) = \frac{d}{dt}\begin{bmatrix} e(t)\\ x(t)\\ x_d(t)\end{bmatrix}
= \begin{bmatrix}
f(x) - f(\hat{x}) + \big(g(x) - g(\hat{x})\big)k(\hat{x} - x_d) - l(\hat{x})\big(c(x) - c(\hat{x})\big)\\
f(x) + g(x)\,k(\hat{x} - x_d)\\
A_d x_d
\end{bmatrix}
+ \begin{bmatrix} -l(\hat{x}) & D & 0\\ 0 & D & 0\\ 0 & 0 & I\end{bmatrix}
\begin{bmatrix} n(t)\\ w(t)\\ r(t)\end{bmatrix} \qquad (6)
$$

The augmented stochastic system above can be represented in the following general form:

$$\dot{\bar{x}}(t) = \bar{F}(\bar{x}(t)) + \bar{D}\bar{w}(t) \qquad (7)$$

where

$$
\bar{F}(\bar{x}(t)) = \begin{bmatrix}
f(x) - f(\hat{x}) + \big(g(x) - g(\hat{x})\big)k(\hat{x} - x_d) - l(\hat{x})\big(c(x) - c(\hat{x})\big)\\
f(x) + g(x)\,k(\hat{x} - x_d)\\
A_d x_d
\end{bmatrix},\quad
\bar{D} = \begin{bmatrix} -l(\hat{x}) & D & 0\\ 0 & D & 0\\ 0 & 0 & I\end{bmatrix},\quad
\bar{w}(t) = \begin{bmatrix} n(t)\\ w(t)\\ r(t)\end{bmatrix}.
$$

The robust H∞ stochastic tracking performance in (5) can then be represented by

$$
\frac{E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right]}
{E\left[\int_0^{t_f}\bar{w}^{T}(t)\bar{w}(t)\,dt\right]} \le \rho^{2} \;\;\text{if } \bar{x}(0)=0,
\qquad\text{or}\qquad
E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right] \le \rho^{2}\,E\left[\int_0^{t_f}\bar{w}^{T}(t)\bar{w}(t)\,dt\right] \qquad (8)
$$

where

$$\bar{Q} = \begin{bmatrix} Q_2 & 0 & 0\\ 0 & Q_1 & -Q_1\\ 0 & -Q_1 & Q_1\end{bmatrix}.$$
If the stochastic initial condition $\bar{x}(0) \neq 0$ is also considered in the H∞ tracking
performance, then the above stochastic H∞ inequality should be modified as

$$E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right] \le E\big[V(\bar{x}(0))\big] + \rho^{2}\,E\left[\int_0^{t_f}\bar{w}^{T}(t)\bar{w}(t)\,dt\right] \qquad (9)$$




for some positive function V(x̄(0)). Then we get the following result.

Theorem 1: If the control gain k(x̂ − x_d) and the observer gain l(x̂) in the observer-based
control law (3) for the stochastic immune system (2) can be specified such that the following
Hamilton-Jacobi inequality (HJI) has a positive solution V(x̄(t)) > 0,



$$
\bar{x}^{T}(t)\bar{Q}\bar{x}(t)
+ \left(\frac{\partial V(\bar{x})}{\partial \bar{x}}\right)^{T}\bar{F}(\bar{x}(t))
+ \frac{1}{4\rho^{2}}\left(\frac{\partial V(\bar{x})}{\partial \bar{x}}\right)^{T}\bar{D}\bar{D}^{T}\left(\frac{\partial V(\bar{x})}{\partial \bar{x}}\right) < 0 \qquad (10)
$$



then the robust stochastic H∞ tracking performance in (5) is achieved for the prescribed
attenuation level ρ².

Proof: see Appendix A.

Since ρ² is a prescribed noise attenuation level of the H∞ tracking performance in (5), based
on the analysis above, the optimal stochastic H∞ tracking performance still needs to minimize
ρ² as follows:

$$\rho_0^{2} = \min_{V(\bar{x})>0}\rho^{2} \qquad (11)$$

subject to V(x̄(t)) > 0 and equation (10).

At present, there does not exist any analytic or numerical solution of (10) or (11) except in
very simple cases.

4. Robust fuzzy observer-based tracking control design for stochastic innate 
immune system 

Because it is very difficult to solve the nonlinear HJI in (10), no simple approach is available
for the constrained optimization problem in (11) in the robust model tracking control of the
stochastic innate immune system. Recently, the T-S fuzzy model has been widely applied to
approximate nonlinear systems by interpolating several linearized systems at different
operating points (Chen et al., 1999, 2000; Takagi & Sugeno, 1985). Using the fuzzy interpolation
approach, the HJI in (10) can be replaced by a set of linear matrix inequalities (LMIs). In this
situation, the nonlinear stochastic H∞ tracking problem in (5) can be solved easily by the fuzzy
method for the design of robust H∞ tracking control of stochastic innate immune response
systems.

Suppose the nonlinear stochastic immune system in (1) can be represented by the Takagi-Sugeno
(T-S) fuzzy model (Takagi & Sugeno, 1985). The T-S fuzzy model is a piecewise interpolation of
several linearized models through membership functions. The fuzzy model is described by fuzzy
If-Then rules and will be employed to deal with the nonlinear H∞ tracking problem by fuzzy
observer-based control to achieve a desired immune response under stochastic noises. The i-th
rule of the fuzzy model for the nonlinear stochastic immune system in (1) is of the following
form (Chen et al., 1999, 2000).
Plant Rule i:

If z₁(t) is F_{i1} and ... and z_g(t) is F_{ig}, then

$$\dot{x}(t) = A_i x(t) + B_i u(t) + Dw(t), \qquad y(t) = C_i x(t) + n(t), \qquad i = 1,2,\ldots,L \qquad (12)$$




in which F_{ij} is the fuzzy set; A_i, B_i, and C_i are known constant matrices; L is the number
of If-Then rules; g is the number of premise variables; and z₁(t), z₂(t), ..., z_g(t) are the
premise variables. The fuzzy system is inferred as follows (Chen et al., 1999, 2000; Takagi &
Sugeno, 1985):

$$\dot{x}(t) = \sum_{i=1}^{L} h_i(z(t))\big[A_i x(t) + B_i u(t) + Dw(t)\big] \qquad (13)$$

where $\mu_i(z(t)) = \prod_{j=1}^{g} F_{ij}(z_j(t))$, $h_i(z(t)) = \mu_i(z(t))\Big/\sum_{i=1}^{L}\mu_i(z(t))$, $z(t) = [z_1(t), z_2(t), \ldots, z_g(t)]$, and $F_{ij}(z_j(t))$ is the grade of membership of $z_j(t)$ in $F_{ij}$.
We assume

$$\mu_i(z(t)) \ge 0 \quad\text{and}\quad \sum_{i=1}^{L}\mu_i(z(t)) > 0 \qquad (14)$$

Therefore, we get

$$h_i(z(t)) \ge 0 \quad\text{and}\quad \sum_{i=1}^{L} h_i(z(t)) = 1 \qquad (15)$$

The T-S fuzzy model in (13) interpolates L stochastic linear systems to approximate the
nonlinear system in (1) via the fuzzy basis functions h_i(z(t)). We can specify the parameters
A_i and B_i easily so that $\sum_{i=1}^{L} h_i(z(t))A_i x(t)$ and $\sum_{i=1}^{L} h_i(z(t))B_i$
in (13) approximate f(x(t)) and g(x(t)) in (2) by the fuzzy identification method (Takagi &
Sugeno, 1985).
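As a rough illustration of (13)-(15), the sketch below evaluates normalized fuzzy basis functions from triangular membership functions and blends the local matrices; the triangular shape, the helper names and the grid layout are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from itertools import product

def tri(z, left, center, right):
    """Triangular membership function used as F_ij(z_j)."""
    if z <= left or z >= right:
        return 0.0
    return (z - left) / (center - left) if z <= center else (right - z) / (right - center)

def fuzzy_basis(z, grids):
    """Normalized basis h_i(z) of eqs. (13)-(15).

    `grids` holds, for each premise variable, a list of (left, center, right)
    triples; the rules are all combinations of the per-variable fuzzy sets.
    """
    per_var = [[tri(zj, *abc) for abc in g] for zj, g in zip(z, grids)]
    mu = np.array([np.prod(combo) for combo in product(*per_var)])  # eq. (13) weights
    s = mu.sum()
    return mu / s if s > 0 else mu  # eq. (14) guarantees s > 0 inside the grid

def blend(h, mats):
    """Interpolated matrix, e.g. sum_i h_i(z) A_i in (13)."""
    return sum(hi * Mi for hi, Mi in zip(h, mats))
```

With two premise variables and three fuzzy sets per variable this yields nine basis values summing to one, matching the L = 9 rules used in the example of Section 5.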

By interpolating fuzzy If-Then rules, the following fuzzy observer is proposed to deal with the
state estimation of the nonlinear stochastic immune system (1).
Observer Rule i:

If z₁(t) is F_{i1} and ... and z_g(t) is F_{ig}, then

$$\dot{\hat{x}}(t) = A_i\hat{x}(t) + B_i u(t) + L_i\big(y(t) - \hat{y}(t)\big), \qquad i = 1,2,\ldots,L \qquad (16)$$

where L_i is the observer gain for the i-th observer rule and $\hat{y}(t) = \sum_{i=1}^{L} h_i(z(t))\,C_i\hat{x}(t)$.
The overall fuzzy observer in (16) can be represented as (Chen et al., 1999, 2000)

$$\dot{\hat{x}}(t) = \sum_{i=1}^{L} h_i(z(t))\big[A_i\hat{x}(t) + B_i u(t) + L_i\big(y(t) - \hat{y}(t)\big)\big] \qquad (17)$$






Suppose the following fuzzy observer-based controller is employed to deal with the above robust
H∞ tracking control design.
Control Rule j:

If z₁(t) is F_{j1} and ... and z_g(t) is F_{jg}, then the overall control law is

$$u(t) = \sum_{j=1}^{L} h_j(z(t))\,K_j\big(\hat{x}(t) - x_d(t)\big) \qquad (18)$$
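To make the interplay of (17) and (18) concrete, the following sketch advances the fuzzy observer and controller by one time step; the local matrices A_i, B_i, C_i, the gains L_i, K_j and the basis function `fuzzy_basis` are assumed to be available (e.g. from the sketch above and the design procedure of this section), and the forward-Euler discretization is an illustrative choice, not part of the original design.

```python
import numpy as np

def fuzzy_observer_controller_step(xhat, xd, y, h, A, B, C, L, K, dt):
    """One forward-Euler step of the fuzzy observer (17) and controller (18).

    h          : array of basis values h_i(z), summing to one
    A, B, C    : lists of local system matrices, one entry per rule
    L, K       : lists of observer and controller gains, one entry per rule
    """
    yhat = sum(hi * (Ci @ xhat) for hi, Ci in zip(h, C))        # y_hat(t)
    u = sum(hj * (Kj @ (xhat - xd)) for hj, Kj in zip(h, K))    # eq. (18)
    dxhat = sum(hi * (Ai @ xhat + Bi @ u + Li @ (y - yhat))     # eq. (17)
                for hi, Ai, Bi, Li in zip(h, A, B, L))
    return xhat + dt * dxhat, u
```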



Remark 2:

1. The premise variables z(t) can be measurable state variables, outputs, or combinations of
measurable state variables (Ma et al., 1998; Tanaka et al., 1998; Wang et al., 1996). The
limitation of this approach is that some state variables must be measurable to construct the
fuzzy observer and fuzzy controller. This is a common limitation of control system designs
based on the T-S fuzzy approach (Ma et al., 1998; Tanaka et al., 1998). If the premise variables
of the fuzzy observer depend on the estimated state variables, i.e., ẑ(t) is used instead of
z(t) in the fuzzy observer, the situation becomes more complicated and it is difficult to find
the control gains K_j and the observer gains L_i directly. This problem has been discussed in
(Tanaka et al., 1998).

2. The problem of constructing a T-S fuzzy model for nonlinear systems can be found in
(Kim et al., 1997; Sugeno & Kang, 1988).

Let us denote the estimation error as $e(t) = x(t) - \hat{x}(t)$, so that the estimation error
dynamics is $\dot{e}(t) = \dot{x}(t) - \dot{\hat{x}}(t)$. After manipulation, the augmented
system in (6) can be expressed in the following fuzzy approximation form:

$$\dot{\bar{x}}(t) = \sum_{i=1}^{L}\sum_{j=1}^{L} h_i(z(t))\,h_j(z(t))\big[\bar{A}_{ij}\bar{x}(t) + \bar{E}_i\bar{w}(t)\big] \qquad (19)$$

where

$$
\bar{A}_{ij} = \begin{bmatrix} A_i - L_i C_j & 0 & 0\\ -B_i K_j & A_i + B_i K_j & -B_i K_j\\ 0 & 0 & A_d\end{bmatrix},\quad
\bar{x}(t) = \begin{bmatrix} e(t)\\ x(t)\\ x_d(t)\end{bmatrix},\quad
\bar{w}(t) = \begin{bmatrix} n(t)\\ w(t)\\ r(t)\end{bmatrix},\quad
\bar{E}_i = \begin{bmatrix} -L_i & D & 0\\ 0 & D & 0\\ 0 & 0 & I\end{bmatrix}.
$$

Theorem 2: For the nonlinear stochastic immune system in (2), if $\bar{P} = \bar{P}^{T} > 0$ is
a common solution of the following matrix inequalities

$$\bar{A}_{ij}^{T}\bar{P} + \bar{P}\bar{A}_{ij} + \frac{1}{\rho^{2}}\bar{P}\bar{E}_i\bar{E}_i^{T}\bar{P} + \bar{Q} < 0, \qquad i,j = 1,2,\ldots,L \qquad (20)$$

then the robust H∞ tracking control performance in (8) or (9) is guaranteed for the prescribed ρ².






In the above robust H∞ tracking control design, we do not need the statistics of the
disturbances, measurement noises and initial condition. We only need to attenuate their effect
on the tracking error and state estimation error below a prescribed level ρ². To obtain the
best H∞ tracking performance, the optimal H∞ tracking control problem can be formulated as the
following minimization problem:

$$\rho_0^{2} = \min_{\bar{P}>0}\rho^{2} \qquad (21)$$

subject to P̄ > 0 and equation (20).

Proof: see Appendix B. 

In general, it is not easy to solve the constrained optimization problem in (21) analytically.
Fortunately, the optimal H∞ tracking control problem in (21) can be transformed into a
minimization problem subject to a set of linear matrix inequalities (LMIs). The resulting LMI
problem (LMIP) can be solved by a computationally efficient method using a convex optimization
technique (Boyd, 1994), as described in the following.

By the Schur complements (Boyd, 1994), equation (20) is equivalent to 



$$
\begin{bmatrix}
\bar{A}_{ij}^{T}\bar{P} + \bar{P}\bar{A}_{ij} + \bar{Q} & \bar{P}\bar{L}\\
\bar{L}^{T}\bar{P} & -\rho^{2}(HH^{T})^{-1}
\end{bmatrix} < 0 \qquad (22)
$$

where

$$
\bar{L} = \begin{bmatrix} L_i & I & 0\\ 0 & I & 0\\ 0 & 0 & I\end{bmatrix},\qquad
H = \begin{bmatrix} -I & 0 & 0\\ 0 & D & 0\\ 0 & 0 & I\end{bmatrix},
$$

so that $\bar{E}_i = \bar{L}H$.



For convenience of design, we assume $\bar{P} = \mathrm{diag}\,(P_{11}, P_{22}, P_{33})$ and
substitute it into (22) to obtain

$$
\begin{bmatrix}
\bar{S}_{11} & \bar{S}_{12} & 0 & P_{11}L_i & P_{11} & 0\\
\bar{S}_{21} & \bar{S}_{22} & \bar{S}_{23} & 0 & P_{22} & 0\\
0 & \bar{S}_{32} & \bar{S}_{33} & 0 & 0 & P_{33}\\
L_i^{T}P_{11} & 0 & 0 & -\rho^{2}I & 0 & 0\\
P_{11} & P_{22} & 0 & 0 & -\rho^{2}(DD^{T})^{-1} & 0\\
0 & 0 & P_{33} & 0 & 0 & -\rho^{2}I
\end{bmatrix} < 0 \qquad (23)
$$

where

$$
\begin{aligned}
\bar{S}_{11} &= A_i^{T}P_{11} + P_{11}A_i - C_j^{T}Z_i^{T} - Z_iC_j + Q_2, \qquad \bar{S}_{12} = \bar{S}_{21}^{T}, \qquad \bar{S}_{21} = -P_{22}B_iK_j,\\
\bar{S}_{22} &= (A_i + B_iK_j)^{T}P_{22} + P_{22}(A_i + B_iK_j) + Q_1, \qquad \bar{S}_{23} = \bar{S}_{32}^{T} = -P_{22}B_iK_j - Q_1,\\
\bar{S}_{33} &= A_d^{T}P_{33} + P_{33}A_d + Q_1, \qquad Z_i = P_{11}L_i.
\end{aligned}
$$






Since the five parameters P₁₁, P₂₂, P₃₃, K_j and L_i have to be determined from (23) and they
are highly coupled, no effective algorithm is available to solve for them simultaneously. In the
following, a decoupled method (Tseng, 2008) is provided to obtain these parameters
simultaneously.

Note that (23) can be decoupled as





$$
\begin{bmatrix}
\bar{S}_{11} & \bar{S}_{12} & 0 & Z_i & P_{11} & 0\\
\bar{S}_{21} & \bar{S}_{22} & \bar{S}_{23} & 0 & P_{22} & 0\\
0 & \bar{S}_{32} & \bar{S}_{33} & 0 & 0 & P_{33}\\
Z_i^{T} & 0 & 0 & -\rho^{2}I & 0 & 0\\
P_{11} & P_{22} & 0 & 0 & -\rho^{2}(DD^{T})^{-1} & 0\\
0 & 0 & P_{33} & 0 & 0 & -\rho^{2}I
\end{bmatrix}
=
\begin{bmatrix}
\bar{S}_{11}+\gamma P_{22} & 0 & 0 & Z_i & P_{11} & 0\\
0 & -\gamma_1 I + Q_1 & -Q_1 & 0 & 0 & 0\\
0 & -Q_1 & \bar{S}_{33}+\gamma P_{22} & 0 & 0 & P_{33}\\
Z_i^{T} & 0 & 0 & -\rho^{2}I & 0 & 0\\
P_{11} & 0 & 0 & 0 & -\rho^{2}(DD^{T})^{-1}+\gamma P_{22} & 0\\
0 & 0 & P_{33} & 0 & 0 & -\rho^{2}I
\end{bmatrix}
$$
$$
+\;
\begin{bmatrix}
-\gamma P_{22} & (-P_{22}B_iK_j)^{T} & 0 & 0 & 0 & 0\\
-P_{22}B_iK_j & (A_i+B_iK_j)^{T}P_{22}+P_{22}(A_i+B_iK_j)+\gamma_1 I & -P_{22}B_iK_j & 0 & P_{22} & 0\\
0 & (-P_{22}B_iK_j)^{T} & -\gamma P_{22} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & P_{22} & 0 & 0 & -\gamma P_{22} & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \qquad (24)
$$

where γ and γ₁ are some positive scalars.

Lemma 1:
If

$$\big[a_{kl}\big]_{6\times 6} < 0 \qquad (25)$$

and

$$\big[b_{kl}\big]_{4\times 4} < 0 \qquad (26)$$

then

$$
\begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} & a_{14} & a_{15}+b_{14} & a_{16}\\
a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} & a_{24} & a_{25}+b_{24} & a_{26}\\
a_{31}+b_{31} & a_{32}+b_{32} & a_{33}+b_{33} & a_{34} & a_{35}+b_{34} & a_{36}\\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} & a_{46}\\
a_{51}+b_{41} & a_{52}+b_{42} & a_{53}+b_{43} & a_{54} & a_{55}+b_{44} & a_{56}\\
a_{61} & a_{62} & a_{63} & a_{64} & a_{65} & a_{66}
\end{bmatrix} < 0 \qquad (27)
$$

i.e. the second matrix, added onto the block rows and columns 1, 2, 3 and 5 of the first, does
not destroy negative definiteness.
Proof: see Appendix C. 

From the above lemma, it is obvious that if

$$
\begin{bmatrix}
A_i^{T}P_{11}+P_{11}A_i-C_j^{T}Z_i^{T}-Z_iC_j+Q_2+\gamma P_{22} & 0 & 0 & Z_i & P_{11} & 0\\
0 & -\gamma_1 I + Q_1 & -Q_1 & 0 & 0 & 0\\
0 & -Q_1 & A_d^{T}P_{33}+P_{33}A_d+Q_1+\gamma P_{22} & 0 & 0 & P_{33}\\
Z_i^{T} & 0 & 0 & -\rho^{2}I & 0 & 0\\
P_{11} & 0 & 0 & 0 & -\rho^{2}(DD^{T})^{-1}+\gamma P_{22} & 0\\
0 & 0 & P_{33} & 0 & 0 & -\rho^{2}I
\end{bmatrix} < 0 \qquad (28)
$$

and

$$
\begin{bmatrix}
-\gamma P_{22} & -P_{22}B_iK_j & 0 & 0\\
(-P_{22}B_iK_j)^{T} & (A_i+B_iK_j)^{T}P_{22}+P_{22}(A_i+B_iK_j)+\gamma_1 I & -P_{22}B_iK_j & P_{22}\\
0 & (-P_{22}B_iK_j)^{T} & -\gamma P_{22} & 0\\
0 & P_{22} & 0 & -\gamma P_{22}
\end{bmatrix} < 0 \qquad (29)
$$

then (23) holds.

Remark 3:

Note that (28) is related to the observer part (i.e., its unknowns are P₁₁, P₂₂, P₃₃ and L_i)
and (29) is related to the controller part (i.e., its unknowns are P₂₂ and K_j). Although the
parameters P₂₂, K_j and γ are coupled nonlinearly, the seven parameters P₁₁, P₂₂, P₃₃, K_j, L_i,
γ and γ₁ can be determined by the following arrangement.






Note that, by the Schur complements (Boyd, 1994), equation (28) is equivalent to

$$
\begin{bmatrix}
A_i^{T}P_{11}+P_{11}A_i-C_j^{T}Z_i^{T}-Z_iC_j+Q_2 & 0 & 0 & Z_i & P_{11} & 0 & I & 0 & 0\\
0 & -\gamma_1 I + Q_1 & -Q_1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & -Q_1 & A_d^{T}P_{33}+P_{33}A_d+Q_1 & 0 & 0 & P_{33} & 0 & I & 0\\
Z_i^{T} & 0 & 0 & -\rho^{2}I & 0 & 0 & 0 & 0 & 0\\
P_{11} & 0 & 0 & 0 & -\rho^{2}(DD^{T})^{-1} & 0 & 0 & 0 & I\\
0 & 0 & P_{33} & 0 & 0 & -\rho^{2}I & 0 & 0 & 0\\
I & 0 & 0 & 0 & 0 & 0 & -\gamma^{-1}W_{22} & 0 & 0\\
0 & 0 & I & 0 & 0 & 0 & 0 & -\gamma^{-1}W_{22} & 0\\
0 & 0 & 0 & 0 & I & 0 & 0 & 0 & -\gamma^{-1}W_{22}
\end{bmatrix} < 0 \qquad (30)
$$

where $W_{22} = P_{22}^{-1}$, and equation (29) is equivalent to

$$
\begin{bmatrix}
-\gamma W_{22} & -B_iY_j & 0 & 0 & 0\\
(-B_iY_j)^{T} & W_{22}A_i^{T}+A_iW_{22}+Y_j^{T}B_i^{T}+B_iY_j & -B_iY_j & W_{22} & W_{22}\\
0 & (-B_iY_j)^{T} & -\gamma W_{22} & 0 & 0\\
0 & W_{22} & 0 & -\gamma W_{22} & 0\\
0 & W_{22} & 0 & 0 & -\gamma_1^{-1}I
\end{bmatrix} < 0 \qquad (31)
$$

where $Y_j = K_jW_{22}$.



Therefore, if (30) and (31) both hold, then (23) holds. Recall that the attenuation level ρ can
be minimized, so that the optimal H∞ tracking performance in (21) is reduced to the following
constrained optimization problem:

$$\rho_0^{2} = \min \rho^{2} \qquad (32)$$

subject to P₁₁ > 0, P₂₂ > 0, P₃₃ > 0, γ > 0, γ₁ > 0 and (30)-(31),

which can be solved by decreasing ρ² as much as possible until parameters P₁₁ > 0, P₂₂ > 0,
P₃₃ > 0, γ > 0 and γ₁ > 0 no longer exist.
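The "decrease ρ² until the LMIs become infeasible" step can be illustrated with a simplified feasibility check. The sketch below bisects ρ² against the analysis LMI (20) in the single unknown P̄ for fixed gains; the synthesis LMIs (30)-(31) would be handled analogously with more decision variables. The matrix lists `Abar_list`, `Ebar_list` and `Qbar` are assumed to have been built beforehand, CVXPY with the SCS solver is an illustrative tooling choice (the chapter itself uses the Matlab Robust Control Toolbox), and the bisection bounds are arbitrary.

```python
import numpy as np
import cvxpy as cp

def lmi_feasible(Abar_list, Ebar_list, Qbar, rho2, eps=1e-6):
    """Feasibility of (20), via its Schur-complement form, for a fixed rho^2."""
    n = Qbar.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n)]
    for Aij, Ei in zip(Abar_list, Ebar_list):
        q = Ei.shape[1]
        top = cp.hstack([Aij.T @ P + P @ Aij + Qbar, P @ Ei])
        bot = cp.hstack([Ei.T @ P, -rho2 * np.eye(q)])
        M = cp.vstack([top, bot])
        cons.append(0.5 * (M + M.T) << -eps * np.eye(n + q))   # [.] < 0
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def min_rho2(Abar_list, Ebar_list, Qbar, lo=1e-3, hi=10.0, iters=20):
    """Bisection on rho^2 for the minimization in (21)/(32)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if lmi_feasible(Abar_list, Ebar_list, Qbar, mid) else (mid, hi)
    return hi
```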






Remark 4:

Note that the optimal H∞ tracking control problem in (32) is not a strict LMI problem, since
(30)-(31) are still bilinear in the two scalars γ and γ₁; it becomes a standard linear matrix
inequality problem (LMIP) (Boyd, 1994) if γ and γ₁ are given in advance. The decoupled method
(Tseng, 2008) brings some conservatism into the controller design. However, the parameters P₁₁,
P₂₂ = W₂₂⁻¹, P₃₃, K_j = Y_jW₂₂⁻¹ and L_i = P₁₁⁻¹Z_i can be determined simultaneously from (32)
by the decoupled method if the scalars γ and γ₁ are given in advance. Software packages such as
the Robust Control Toolbox in Matlab (Balas et al., 2007) can be employed to solve the LMIP in
(32) easily.

In general, determining the scalars γ and γ₁ beforehand so that the LMIP is solvable with a
small attenuation level ρ² is nontrivial. In this study, a genetic algorithm (GA) is therefore
employed to deal with the optimal H∞ tracking control problem in (32), since a GA can evaluate
many points of the parameter space simultaneously and is a powerful search algorithm based on
the mechanics of natural selection and natural genetics. More details about GAs can be found in
(Jang et al., 1997).
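The following sketch shows one way such a GA outer loop could be organized; the binary encoding, population size, rate constants, search ranges, and the function `solve_lmip` — assumed to return the smallest feasible ρ² of (32) for fixed (γ, γ₁), e.g. via the bisection sketched earlier, or `None` when infeasible — are all illustrative assumptions rather than the authors' exact implementation.

```python
import random

def decode(bits, lo, hi):
    """Map a binary string to a scalar in [lo, hi] (Step 3 coding)."""
    return lo + int(bits, 2) / (2 ** len(bits) - 1) * (hi - lo)

def ga_search(solve_lmip, n_bits=12, pop_size=20, gens=30, pc=0.8, pm=0.05):
    """GA over (gamma, gamma1); fitness is the attenuation level rho^2 (smaller is better)."""
    new_string = lambda: "".join(random.choice("01") for _ in range(2 * n_bits))
    flip = {"0": "1", "1": "0"}
    population = [new_string() for _ in range(pop_size)]
    best_rho2, best_pair = float("inf"), None
    for _ in range(gens):
        scored = []
        for s in population:
            gamma = decode(s[:n_bits], 1.0, 100.0)      # illustrative search ranges
            gamma1 = decode(s[n_bits:], 1e-3, 1.0)
            rho2 = solve_lmip(gamma, gamma1)            # None if the LMIP is infeasible
            if rho2 is not None:
                scored.append((rho2, s))
                if rho2 < best_rho2:
                    best_rho2, best_pair = rho2, (gamma, gamma1)
        if not scored:                                  # restart if every string failed
            population = [new_string() for _ in range(pop_size)]
            continue
        weights = [1.0 / (r + 1e-9) for r, _ in scored]
        parents = [s for _, s in scored]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.choices(parents, weights, k=2)        # reproduction
            if random.random() < pc:                              # crossover
                cut = random.randrange(1, 2 * n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (p1, p2):                                    # mutation
                children.append("".join(flip[b] if random.random() < pm else b for b in c))
        population = children[:pop_size]
    return best_rho2, best_pair
```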

According to the analysis above, the H∞ tracking control of the stochastic innate immune system
via fuzzy observer-based state feedback is summarized in the design procedure below, and the
structural diagram of the robust fuzzy observer-based tracking control design is shown in Fig. 3.



Fig. 3. Structural diagram of the robust fuzzy observer-based tracking control design: the
nonlinear immune system ẋ = f(x) + g(x)u + Dw is approximated by the T-S fuzzy model
ẋ = Σ h_i[A_i x + B_i u + Dw]; the fuzzy observer x̂̇ = Σ h_i[A_i x̂ + B_i u + L_i(y − ŷ)] and
the fuzzy observer-based controller u = Σ h_j K_j(x̂ − x_d) track the desired immune response
x_d, with the gains K_j and L_i obtained by solving the LMIs.




Design Procedure: 

1. Provide a desired reference model in (4) of the immune system. 

2. Select membership functions and construct fuzzy plant rules in (12). 

3. Generate randomly a population of binary strings: With the binary coding method, the 
scalars y and y 1 would be coded as binary strings. Then solve the LMIP in (32) with 
scalars y and y 1 corresponding to binary string using Robust Control Toolbox in 
Matlab by searching the minimal value of p 2 . If the LMIP is infeasible for the 
corresponding string, this string is escaped from the current generation. 

4. Calculate the fitness value for each passed string: In this step, the fitness value is 
calculated based on the attenuation level p 2 . 

5. Create offspring strings to form a new generation by some simple GA operators like 
reproduction, crossover, and mutation: In this step, (i) strings are selected in a mating 
pool from the passed strings with probabilities proportional to their fitness values, (ii) 
and then crossover process is applied with a probability equal to a prescribed crossover 
rate, (iii) and finally mutation process is applied with a probability equal to a prescribed 
mutation rate. Repeating (i) to (iii) until enough strings are generated to form the next 
generation. 

6. Repeat Step 3 to Step 5 for several generations until a stop criterion is met. 

7. Based on the scalars γ and γ₁ obtained from the above steps, one can obtain the attenuation
level ρ² and the corresponding P₁₁, P₂₂ = W₂₂⁻¹, P₃₃, K_j = Y_jW₂₂⁻¹ and L_i = P₁₁⁻¹Z_i
simultaneously.

8. Construct the fuzzy observer in (17) and fuzzy controller in (18). 

5. Computational simulation example 

Parameter  Value  Description
a11        1      Pathogen reproduction rate coefficient
a12        1      Suppression-by-pathogens coefficient
a22        3      Immune reactivity coefficient
a23        1      Mean immune cell production rate coefficient
x2*        2      Steady-state concentration of immune cells
a31        1      Antibody production rate coefficient
a32        1.5    Antibody mortality coefficient
a33        0.5    Rate at which antibodies suppress pathogens
a41        0.5    Organ damage dependence on pathogen damage possibilities coefficient
a42        1      Organ recovery rate
b1         -1     Pathogen killer's agent coefficient
b2         1      Immune cell enhancer coefficient
b3         1      Antibody enhancer coefficient
b4         -1     Organ health enhancer coefficient
c1         1      Immune cell measurement coefficient
c2         1      Antibody measurement coefficient
c3         1      Organ health measurement coefficient

Table 1. Model parameters of the innate immune system (Marchuk, 1983; Stengel et al., 2002b).






We consider the nonlinear stochastic innate immune system in (1), which is shown in Fig. 1. The
values of the parameters are given in Table 1. The stochastic noises of immune systems are
mainly due to measurement errors, modeling errors and process noises (Milutinovic & De Boer,
2007). The rate of continuing introduction of exogenous pathogens and the environmental
disturbances w₁ ~ w₄ are unknown but bounded signals. In an infectious situation, the microbes
infect the organ not only through an initial concentration of pathogen at the beginning, but
also through the continuous exogenous pathogen invasion w₁ and the other environmental
disturbances w₂ ~ w₄. In reality, however, the concentration of invaded pathogens is hard to
measure. So we assume that only immune cells, antibodies and organ health can be measured, with
measurement noises, by medical devices or other biological techniques (e.g. an
immunofluorescence microscope); the number of molecules can then be detected easily by using a
fluorescence microscope (Piston, 1999).

The dynamic model of the stochastic innate immune system under uncertain initial states,
environmental disturbances and measurement noises, controlled by a combined therapeutic
control, is given by

$$
\begin{aligned}
\dot{x}_1 &= (1 - x_3)\,x_1 - u_1 + w_1\\
\dot{x}_2 &= 3\,a_{21}(x_4)\,x_1 x_3 - (x_2 - 2) + u_2 + w_2\\
\dot{x}_3 &= x_2 - (1.5 + 0.5x_1)\,x_3 + u_3 + w_3 \qquad (33)\\
\dot{x}_4 &= 0.5x_1 - x_4 - u_4 + w_4\\
y_1 &= x_2 + n_1,\quad y_2 = x_3 + n_2,\quad y_3 = x_4 + n_3\\
a_{21}(x_4) &= \begin{cases}\cos(\pi x_4), & 0 \le x_4 < 0.5\\ 0, & 0.5 \le x_4\end{cases}
\end{aligned}
$$



The initial condition is assumed to be x(0) = [3.5 2 1.33 0]ᵀ. For convenience of simulation, we
assume that w₁ ~ w₄ are zero mean white noises with standard deviations all equal to 2. The
measurement noises n₁ ~ n₃ are zero mean white noises with standard deviations equal to 0.1. In
this example, the therapeutic controls u₁ ~ u₄ are combined to enhance the immune system. The
measurable state variables y₁ ~ y₃, corrupted by measurement noises of the medical devices or
biological techniques, are shown in Fig. 4.
Our reference model design objective is to specify the system matrix A_d and the signal r(t)
beforehand so that the transient responses and steady state of the reference system are the
desired ones for the stochastic innate immune response system. If the real parts of the
eigenvalues of A_d are more negative (i.e. more robustly stable), the tracking system will be
more robust to the environmental disturbances. After some numerical simulations for clinical
treatment, the desired reference signals are obtained by the following reference model, which
is shown in Fig. 5.



$$
\dot{x}_d(t) = \begin{bmatrix} -1.1 & 0 & 0 & 0\\ 0 & -2 & 0 & 0\\ 0 & 0 & -4 & 0\\ 0 & 0 & 0 & -1.5\end{bmatrix} x_d(t) + B_d\,u_{step}(t) \qquad (34)
$$

where $B_d = [0\;\; 4\;\; 16/3\;\; 0]^{T}$ and $u_{step}(t)$ is the unit step function. The
initial condition is given by $x_d(0) = [2.5\;\; 3\;\; 1.1\;\; 0.8]^{T}$.






Fig. 4. The measurable state variables y₁ ~ y₃ (immune cell, antibody and organ health
measurements) corrupted by the measurement noises n₁ ~ n₃ of the medical devices or biological
techniques.



Fig. 5. The desired reference model responses of the four states in (34): pathogens (x_d1,
blue, dashed square line), immune cells (x_d2, green, dashed triangle line), antibodies (x_d3,
red, dashed diamond line) and organ (x_d4, magenta, dashed circle line).

We consider the lethal case of the uncontrolled stochastic immune system in Fig. 6. The
pathogen concentration increases rapidly, causing organ failure. We aim at curing the organ
before the organ health index exceeds one after a period of pathogen infection. As shown in
Fig. 6, the black dashed line marks a proper time to administer drugs.






Fig. 6. The uncontrolled stochastic immune responses (lethal case) in (33), showing the rising
pathogen concentration at the beginning of the time period. In this case, we try to administer
a treatment after a short period of pathogen infection. The cutting line (black dashed line)
marks the time point at which drugs are given. The organ will survive or fail according to the
organ health threshold (horizontal dotted line) [x₄ < 1: survival, x₄ > 1: failure].

To minimize the design effort and complexity for the nonlinear innate immune system in (33),
we employ the T-S fuzzy model to construct fuzzy rules that approximate the nonlinear immune
system, with the measured outputs y₃ and y₄ as premise variables.
Plant Rule i:

If y₃ is F_{i1} and y₄ is F_{i2}, then

$$\dot{x}(t) = A_i x(t) + Bu(t) + Dw(t), \qquad y(t) = Cx(t) + n(t), \qquad i = 1,2,\ldots,L$$

To construct the fuzzy model, we must find the operating points of the innate immune response.
Suppose the operating points for y₃ are at y₃₁ = −0.333, y₃₂ = 1.667, and y₃₃ = 3.667.
Similarly, the operating points for y₄ are at y₄₁ = 0, y₄₂ = 1, and y₄₃ = 2. For convenience, we
create three triangle-type membership functions for each of the two premise variables at these
operating points, as in Fig. 7, so that the number of fuzzy rules is L = 9.
parameters B , C and D . In order to accomplish the robust FL tracking performance, we 
should adjust a set of weighting matrices Q 1 and Q 2 in (8) or (9) as 



Qi 



0.01 

0.01 

0.01 

0.01 



Qi 



0.01 

0.01 

0.01 

0.01 



After specifying the desired reference model, we need to solve the constrained optimization
problem in (32) by employing the Matlab Robust Control Toolbox. Finally, we obtain the feasible
parameters γ = 40 and γ₁ = 0.02, a minimum attenuation level ρ₀² = 0.93 and a






common positive-definite symmetric matrix P̄ with diagonal blocks P₁₁, P₂₂ and P₃₃ as follows:

$$
P_{11} = \begin{bmatrix}
0.23193 & -1.5549\mathrm{e}{-4} & 0.083357 & -0.2704\\
-1.5549\mathrm{e}{-4} & 0.010373 & -1.4534\mathrm{e}{-3} & -7.0637\mathrm{e}{-3}\\
0.083357 & -1.4534\mathrm{e}{-3} & 0.33365 & 0.24439\\
-0.2704 & -7.0637\mathrm{e}{-3} & 0.24439 & 0.76177
\end{bmatrix}
$$

$$
P_{22} = \begin{bmatrix}
0.0023082 & 9.4449\mathrm{e}{-6} & -5.7416\mathrm{e}{-5} & -5.0375\mathrm{e}{-6}\\
9.4449\mathrm{e}{-6} & 0.0016734 & 2.4164\mathrm{e}{-5} & -1.8316\mathrm{e}{-6}\\
-5.7416\mathrm{e}{-5} & 2.4164\mathrm{e}{-5} & 0.0015303 & 5.8989\mathrm{e}{-6}\\
-5.0375\mathrm{e}{-6} & -1.8316\mathrm{e}{-6} & 5.8989\mathrm{e}{-6} & 0.0015453
\end{bmatrix}
$$

$$
P_{33} = \begin{bmatrix}
1.0671 & -1.0849\mathrm{e}{-5} & 3.4209\mathrm{e}{-5} & 5.9619\mathrm{e}{-6}\\
-1.0849\mathrm{e}{-5} & 1.9466 & -1.4584\mathrm{e}{-5} & 1.9167\mathrm{e}{-6}\\
3.4209\mathrm{e}{-5} & -1.4584\mathrm{e}{-5} & 3.8941 & -3.2938\mathrm{e}{-6}\\
5.9619\mathrm{e}{-6} & 1.9167\mathrm{e}{-6} & -3.2938\mathrm{e}{-6} & 1.4591
\end{bmatrix}
$$

The control gains K_j and the observer gains L_i are given in Appendix D.





Fig. 7. Membership functions for the two premise variables y₃ and y₄.

Figures 8-9 present the robust H∞ tracking control of the stochastic immune system under the
continuous exogenous pathogens, environmental disturbances and measurement noises. Figure 8
shows the responses of the uncontrolled stochastic immune system under the initial
concentration of pathogen infection. After one time unit (the black dashed line), a treatment
by the robust H∞ tracking control of the pathogen infection is applied. It is seen that the
stochastic immune system approaches the desired reference model quickly. From the simulation
results, the tracking performance of the robust model tracking control via T-S fuzzy
interpolation is quite satisfactory, except for the pathogen state x₁, because the pathogen
concentration cannot be measured; nevertheless, after treatment for a specific period, the
pathogens are still under control. Figure 9 shows the four combined therapeutic control agents.
The achieved robust H∞ tracking performance is estimated as

$$
\frac{E\left[\int_0^{t_f}\big(\tilde{x}^{T}(t)Q_1\tilde{x}(t) + e^{T}(t)Q_2 e(t)\big)\,dt\right]}
{E\left[\int_0^{t_f}\big(w^{T}(t)w(t) + n^{T}(t)n(t) + r^{T}(t)r(t)\big)\,dt\right]} \approx 0.033 < \rho^{2} = 0.93
$$
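An empirical ratio of this kind can be estimated from simulated trajectories roughly as follows; the trajectory arrays, the time step `dt` and the weighting matrices are assumed to come from a simulation of (33) under the designed controller, and averaging over several noise realizations would stand in for the expectations.

```python
import numpy as np

def tracking_ratio(x, xd, xhat, w, n, r, Q1, Q2, dt):
    """Empirical estimate of the H-infinity tracking ratio in (5).

    All trajectory arguments are arrays of shape (T, dim), sampled every dt
    time units over [0, t_f].
    """
    x_tilde = x - xd                      # tracking error
    e = x - xhat                          # estimation error
    num = np.sum(np.einsum("ti,ij,tj->t", x_tilde, Q1, x_tilde)
                 + np.einsum("ti,ij,tj->t", e, Q2, e)) * dt
    den = np.sum(np.einsum("ti,ti->t", w, w)
                 + np.einsum("ti,ti->t", n, n)
                 + np.einsum("ti,ti->t", r, r)) * dt
    return num / den
```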






Fig. 8. The robust H∞ tracking control of the stochastic immune system under continuous
exogenous pathogens, environmental disturbances and measurement noises. A treatment is
administered after a short period (one time unit) of pathogen infection; the stochastic immune
system then approaches the desired reference model quickly, except for the pathogen state x₁.



Fig. 9. The therapeutic control agents of the robust H∞ tracking control in the simulation
example: u₁ for pathogens (blue, solid square line), u₂ for immune cells (green, solid triangle
line), u₃ for antibodies (red, solid diamond line) and u₄ for the organ (magenta, solid circle
line).

Obviously, the robust H∞ tracking performance requirement is satisfied. The conservative result
is due to the inherent conservatism of solving the LMIs in (30)-(32).




6. Discussion and conclusion 

In this study, we have developed a robust Hoo tracking control design of stochastic immune 
response for therapeutic enhancement to track a prescribed immune response under 
uncertain initial states, environmental disturbances and measurement noises. Although the 
mathematical model of stochastic innate immune system is taken from the literature, it still 
needs to compare quantitatively with empirical evidence in practical application. For 
practical implementation, accurate biodynamic models are required for treatment 
application. However, model identification is not the topic of this paper. Furthermore, we 
assume that not all state variables can be measured. In the measurement process, the 
measured states are corrupted by noises. In this study, the statistics of the disturbances,
measurement noises and initial condition are assumed to be unavailable and cannot be used for
an optimal stochastic tracking design. Therefore, the proposed H∞ observer design is employed
to attenuate the measurement noises so as to robustly estimate the state variables for
therapeutic control, and the H∞ control design is employed to attenuate the disturbances so as
to robustly track the desired time response of the stochastic immune system simultaneously.
Since the proposed H∞ observer-based tracking control design can provide an efficient way to
create a real-time therapeutic regime despite disturbances, measurement noises and uncertain
initial conditions, so as to protect suspected patients from pathogen infection, in the future
we will focus on applications of the robust H∞ observer-based control design to therapy and
drug design incorporating nanotechnology and metabolic engineering schemes.

Robustness is a significant property that allows for the stochastic innate immune system to 
maintain its function despite exogenous pathogens, environmental disturbances, system 
uncertainties and measurement noises. In general, the robust H∞ observer-based tracking
control design for stochastic innate immune system needs to solve a complex nonlinear 
Hamilton-Jacobi inequality (HJI), which is generally difficult to solve for this control design. 
Based on the proposed fuzzy interpolation approach, the design of nonlinear robust Hoo 
observer-based tracking control problem for stochastic innate immune system is 
transformed to solve a set of equivalent linear Hoo observer-based tracking problem. Such 
transformation can then provide an easier approach by solving an LMI-constrained 
optimization problem for robust Hoo observer-based tracking control design. With the help 
of the Robust Control Toolbox in Matlab instead of the HJI, we could solve these problems 
for robust Hoo observer-based tracking control of stochastic innate immune system more 
efficiently. From the in silico simulation examples, the proposed robust Hoo observer-based 
tracking control of stochastic immune system could track the prescribed reference time 
response robustly, which may lead to potential application in therapeutic drug design for a 
desired immune response during an infection episode. 

7. Appendix 

7.1 Appendix A: Proof of Theorem 1 

Before the proof of Theorem 1, the following lemma is necessary. 
Lemma 2: 

For all vectors a, β ∈ R^{n×1}, the following inequality always holds:

$$a^{T}\beta + \beta^{T}a \le \frac{1}{\rho^{2}}a^{T}a + \rho^{2}\beta^{T}\beta \quad \text{for any scalar } \rho > 0.$$

Let us denote a Lyapunov energy function V(x̄(t)) > 0 and consider the following equivalent
equation:






$$
E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right]
= E\big[V(\bar{x}(0))\big] - E\big[V(\bar{x}(t_f))\big]
+ E\left[\int_0^{t_f}\Big(\bar{x}^{T}(t)\bar{Q}\bar{x}(t) + \frac{dV(\bar{x}(t))}{dt}\Big)dt\right] \qquad (A1)
$$

By the chain rule, we get

$$
\frac{dV(\bar{x}(t))}{dt} = \left(\frac{\partial V(\bar{x})}{\partial \bar{x}}\right)^{T}\dot{\bar{x}}(t)
= \left(\frac{\partial V(\bar{x})}{\partial \bar{x}}\right)^{T}\big(\bar{F}(\bar{x}(t)) + \bar{D}\bar{w}(t)\big) \qquad (A2)
$$

Substituting the above equation into (A1) and using the fact that $V(\bar{x}(t_f)) \ge 0$, we get

$$
E\left[\int_0^{t_f}\bar{x}^{T}\bar{Q}\bar{x}\,dt\right]
\le E\big[V(\bar{x}(0))\big]
+ E\left[\int_0^{t_f}\Big(\bar{x}^{T}\bar{Q}\bar{x} + \Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{F}(\bar{x}) + \Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{D}\bar{w}\Big)dt\right] \qquad (A3)
$$

By Lemma 2, we have

$$
\Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{D}\bar{w}
\le \frac{1}{4\rho^{2}}\Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{D}\bar{D}^{T}\Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big) + \rho^{2}\bar{w}^{T}\bar{w} \qquad (A4)
$$

Therefore, we can obtain

$$
E\left[\int_0^{t_f}\bar{x}^{T}\bar{Q}\bar{x}\,dt\right]
\le E\big[V(\bar{x}(0))\big]
+ E\left[\int_0^{t_f}\Big(\bar{x}^{T}\bar{Q}\bar{x} + \Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{F}(\bar{x})
+ \frac{1}{4\rho^{2}}\Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)^{T}\bar{D}\bar{D}^{T}\Big(\frac{\partial V(\bar{x})}{\partial \bar{x}}\Big)
+ \rho^{2}\bar{w}^{T}\bar{w}\Big)dt\right] \qquad (A5)
$$

By the inequality in (10), we then get

$$
E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right]
\le E\big[V(\bar{x}(0))\big] + \rho^{2}E\left[\int_0^{t_f}\bar{w}^{T}(t)\bar{w}(t)\,dt\right] \qquad (A6)
$$

If $\bar{x}(0) = 0$, then we get the inequality in (8).

7.2 Appendix B: Proof of Theorem 2 

Let us choose a Lyapunov energy function $V(\bar{x}(t)) = \bar{x}^{T}(t)\bar{P}\bar{x}(t) > 0$
where $\bar{P} = \bar{P}^{T} > 0$. Then equation (A1) is equivalent to

$$
\begin{aligned}
E\left[\int_0^{t_f}\bar{x}^{T}\bar{Q}\bar{x}\,dt\right]
&= E\big[V(\bar{x}(0))\big] - E\big[V(\bar{x}(t_f))\big]
+ E\left[\int_0^{t_f}\big(\bar{x}^{T}\bar{Q}\bar{x} + 2\bar{x}^{T}\bar{P}\dot{\bar{x}}\big)dt\right]\\
&\le E\big[V(\bar{x}(0))\big]
+ E\left[\int_0^{t_f}\Big(\bar{x}^{T}\bar{Q}\bar{x} + 2\bar{x}^{T}\bar{P}\sum_{i=1}^{L}\sum_{j=1}^{L}h_i(z)h_j(z)\big[\bar{A}_{ij}\bar{x} + \bar{E}_i\bar{w}\big]\Big)dt\right]\\
&= E\big[V(\bar{x}(0))\big]
+ E\left[\int_0^{t_f}\Big(\bar{x}^{T}\bar{Q}\bar{x} + \sum_{i=1}^{L}\sum_{j=1}^{L}h_i(z)h_j(z)\big[2\bar{x}^{T}\bar{P}\bar{A}_{ij}\bar{x} + 2\bar{x}^{T}\bar{P}\bar{E}_i\bar{w}\big]\Big)dt\right] \qquad (A7)
\end{aligned}
$$

By Lemma 2, we have

$$
2\bar{x}^{T}\bar{P}\bar{E}_i\bar{w} = \bar{x}^{T}\bar{P}\bar{E}_i\bar{w} + \bar{w}^{T}\bar{E}_i^{T}\bar{P}\bar{x}
\le \frac{1}{\rho^{2}}\bar{x}^{T}\bar{P}\bar{E}_i\bar{E}_i^{T}\bar{P}\bar{x} + \rho^{2}\bar{w}^{T}\bar{w} \qquad (A8)
$$

Therefore, we can obtain

$$
E\left[\int_0^{t_f}\bar{x}^{T}\bar{Q}\bar{x}\,dt\right]
\le E\big[V(\bar{x}(0))\big]
+ E\left[\int_0^{t_f}\Big(\sum_{i=1}^{L}\sum_{j=1}^{L}h_i(z)h_j(z)\,\bar{x}^{T}\Big[\bar{A}_{ij}^{T}\bar{P} + \bar{P}\bar{A}_{ij} + \bar{Q} + \frac{1}{\rho^{2}}\bar{P}\bar{E}_i\bar{E}_i^{T}\bar{P}\Big]\bar{x} + \rho^{2}\bar{w}^{T}\bar{w}\Big)dt\right] \qquad (A9)
$$

By the inequality in (20), we then get

$$
E\left[\int_0^{t_f}\bar{x}^{T}(t)\bar{Q}\bar{x}(t)\,dt\right]
\le E\big[V(\bar{x}(0))\big] + \rho^{2}E\left[\int_0^{t_f}\bar{w}^{T}(t)\bar{w}(t)\,dt\right] \qquad (A10)
$$

This is the inequality in (9). If $\bar{x}(0) = 0$, then we get the inequality in (8).



7.3 Appendix C: Proof of Lemma 1 

For any $\bar{e} = [e_1^{T}\; e_2^{T}\; e_3^{T}\; e_4^{T}\; e_5^{T}\; e_6^{T}]^{T} \neq 0$, if
(25)-(26) hold, then, denoting the matrices in (25), (26) and (27) by $\Xi_a$, $\Xi_b$ and
$\Xi_c$ respectively and letting $\tilde{e} = [e_1^{T}\; e_2^{T}\; e_3^{T}\; e_5^{T}]^{T}$, we
have

$$\bar{e}^{T}\Xi_c\,\bar{e} = \bar{e}^{T}\Xi_a\,\bar{e} + \tilde{e}^{T}\Xi_b\,\tilde{e} < 0,$$

since $\bar{e}^{T}\Xi_a\bar{e} < 0$ by (25) and $\tilde{e}^{T}\Xi_b\tilde{e} \le 0$ by (26).
This implies that (27) holds. Therefore, the proof is completed.






7.4 Appendix D: Parameters of the Fuzzy System, control gains and observer gains 

The nonlinear innate immune system in (33) is approximated by a Takagi-Sugeno fuzzy system. By
the fuzzy modeling method (Takagi & Sugeno, 1985), the local linear system matrices A_i and the
parameters B, C, D, K_j and L_i are obtained as follows.















The nine local system matrices A₁-A₉ are obtained by evaluating the state-dependent
coefficients of (33) at the nine combinations of the operating points of the two premise
variables listed in Section 5, and the input, output and disturbance matrices are

$$
B = \begin{bmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\end{bmatrix},\qquad
C = \begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix},\qquad
D = I_{4\times 4}.
$$



$$
K_j = \begin{bmatrix}
17.712 & 0.14477 & -0.43397 & 0.18604\\
0.20163 & 18.201 & 0.37171 & -0.00052926\\
0.51947 & -0.31484 & -13.967 & -0.052906\\
0.28847 & 0.0085838 & 0.046538 & 14.392
\end{bmatrix}, \qquad j = 1,\ldots,9,
$$

$$
L_i = \begin{bmatrix}
12.207 & -26.065 & 22.367\\
93.156 & -8.3701 & 7.8721\\
-8.3713 & 20.912 & -16.006\\
7.8708 & -16.005 & 14.335
\end{bmatrix}, \qquad i = 1,\ldots,9.
$$



8. References 

Asachenkov, A.L. (1994) Disease dynamics. Birkhauser Boston. 

Balas, G., Chiang, R., Packard, A. & Safonov, M. (2007) MATLAB: Robust Control Toolbox 3 

User's Guide. The MathWorks, Inc. 
Bell, D.J. & Katusiime, F. (1980) A Time-Optimal Drug Displacement Problem, Optimal 

Control Applications & Methods, 1, 217-225. 




Bellman, R. (1983) Mathematical methods in medicine. World Scientific, Singapore. 
Bonhoeffer, S., May, R.M., Shaw, G.M. & Nowak, M.A. (1997) Virus dynamics and drug 

therapy, Proc. Natl Acad. Sci. USA, 94, 6971-6976. 
Boyd, S.P. (1994) Linear matrix inequalities in system and control theory. Society for Industrial 

and Applied Mathematics, Philadelphia. 
Carson, E.R., Cramp, D.G., Finkelstein, F. & Ingram, D. (1985) Control system concepts and 

approaches in clinical medicine. In Carson, E.R. & Cramp, D.G. (eds), Computers and 

Control in Clinical Medicine. Plenum Press, New York, 1-26. 
Chen, B.S., Chang, C.H. & Chuang, Y.J. (2008) Robust model matching control of immune 

systems under environmental disturbances: dynamic game approach, / Theor Biol, 

253,824-837. 
Chen, B.S., Tseng, C.S. & Uang, H.J. (1999) Robustness design of nonlinear dynamic systems 

via fuzzy linear control, IEEE Transactions on Fuzzy Systems, 7, 571-585. 
Chen, B.S., Tseng, C.S. & Uang, H.J. (2000) Mixed H-2/H-infinity fuzzy output feedback 

control design for nonlinear dynamic systems: An LMI approach, IEEE Transactions 

on Fuzzy Systems, 8, 249-265. 
Chizeck, H.J. & Katona, P.G. (1985) Closed-loop control. In Carson, E.R. & Cramp, D.G. 

(eds), Computers and Control in Clinical Medicine. Plenum Press, New York, 95-151. 
De Boer, R.J. & Boucher, C.A. (1996) Anti-CD4 therapy for AIDS suggested by mathematical 

models, Proc. Biol. Sci. , 263, 899-905. 
Gentilini, A., Morari, M., Bieniok, C, Wymann, R. & Schnider, T.W. (2001) Closed-loop 

control of analgesia in humans. Proc. IEEE Conf. Decision and Control. Orlando, 861- 

866. 
Janeway, C. (2005) Immunobiology : the immune system in health and disease. Garland, New 

York. 
Jang, J.-S.R., Sun, C.-T. & Mizutani, E. (1997) Neuro-fuzzy and soft computing : a computational 

approach to learning and machine intelligence. Prentice Hall, Upper Saddle River, NJ. 
Jelliffe, R.W. (1986) Clinical applications of pharmacokinetics and control theory: planning, 

monitoring, and adjusting regimens of aminoglycosides, lidocaine, digitoxin, and 

digoxin. In Maronde, R.F. (ed), Topics in clinical pharmacology and therapeutics. 

Springer, New York, 26-82. 
Kim, E., Park, M. & Ji, S.W. (1997) A new approach to fuzzy modeling, IEEE Transactions on 

Fuzzy Systems, 5, 328-337. 
Kirschner, D., Lenhart, S. & Serbin, S. (1997) Optimal control of the chemotherapy of HIV, /. 

Math. Biol, 35, 775-792. 
Kwong, G.K., Kwok, K.E., Finegan, B.A. & Shah, S.L. (1995) Clinical evaluation of long range 

adaptive control for meanarterial blood pressure regulation. Proc. Am. Control Conf., 

Seattle, 786-790. 
Li, T.H.S., Chang, S.J. & Tong, W. (2004) Fuzzy target tracking control of autonomous 

mobile robots by using infrared sensors, IEEE Transactions on Fuzzy Systems, 12, 

491-501. 
Lian, K.Y., Chiu, C.S., Chiang, T.S. & Liu, P. (2001) LMI-based fuzzy chaotic synchronization 

and communications, IEEE Transactions on Fuzzy Systems, 9, 539-553. 
Lydyard, P.M., Whelan, A. & Fanger, M.W. (2000) Instant notes in immunology. Springer, 

New York. 




Ma, X.J., Sun, Z.Q. & He, Y.Y. (1998) Analysis and design of fuzzy controller and fuzzy 

observer, IEEE Transactions on Fuzzy Systems, 6, 41-51. 
Marchuk, G.I. (1983) Mathematical models in immunology. Optimization Software, Inc. 

Worldwide distribution rights by Springer, New York. 
Milutinovic, D. & De Boer, R.J. (2007) Process noise: an explanation for the fluctuations in 

the immune response during acute viral infection, Biophys J, 92, 3358-3367. 
Nowak, M. A. & May, R.M. (2000) Virus dynamics : mathematical principles of immunology and 

virology. Oxford University Press, Oxford. 
Nowak, M.A., May, R.M., Phillips, R.E., Rowland-Jones, S., Lalloo, D.G., McAdam, S., 

Klenerman, P., Koppe, B., Sigmund, K., Bangham, C.R. & et al. (1995) Antigenic 

oscillations and shifting immunodominance in HIV-1 infections, Nature, 375, 606- 

611. 
Parker, R.S., Doyle, J.F., III., Harting, J.E. & Peppas, N.A. (1996) Model predictive control for 

infusion pump insulin delivery. Proceedings of the 18th Annual International 

Conference of the IEEE Engineering in Medicine and Biology Society. Amsterdam, 1822- 

1823. 
Perelson, A.S., Kirschner, D.E. & De Boer, R. (1993) Dynamics of HIV infection of CD4+ T 

cells, Math. Biosci, 114, 81-125. 
Perelson, A.S., Neumann, A.U., Markowitz, M., Leonard, J.M. & Ho, D.D. (1996) HIV-1 

dynamics in vivo: virion clearance rate, infected cell life-span, and viral generation 

time, Science, 271, 1582-1586. 
Perelson, A.S. & Weisbuch, G. (1997) Immunology for physicists, Reviews of Modern Physics, 

69,1219-1267. 
Piston, D.W. (1999) Imaging living cells and tissues by two-photon excitation microscopy, 

Trends Cell Biol, 9, 66-69. 
Polycarpou, M.M. & Conway, J.Y. (1995) Modeling and control of drug delivery systems 

using adaptive neuralcontrol methods. Proc. Am. Control Conf, Seattle, 781-785. 
Robinson, D.C. (1986) Topics in clinical pharmacology and therapeutics. In Maronde, R.F. 

(ed), Principles of Pharmacokinetics. Springer, New York, 1-12. 
Rundell, A., HogenEsch, H. & DeCarlo, R. (1995) Enhanced modeling of the immune system 

to incorporate naturalkiller cells and memory. Proc. Am. Control Conf, Seattle, 255- 

259. 
Schumitzky, A. (1986) Stochastic control of pharmacokinetic systems. In Maronde, R.F. (ed), 

Topics in clinical pharmacology and therapeutics. Springer, New York, 13-25. 
Stafford, M.A., Corey, L., Cao, Y., Daar, E.S., Ho, D.D. & Perelson, A.S. (2000) Modeling 

plasma virus concentration during primary HIV infection, /. Theor. Biol, 203, 285- 

301. 
Stengel, R.F., Ghigliazza, R., Kulkarni, N. & Laplace, O. (2002a) Optimal control of innate 

immune response, Optimal Control Applications & Methods, 23, 91-104. 
Stengel, R.F., Ghigliazza, R.M. & Kulkarni, N.V. (2002b) Optimal enhancement of immune 

response, Bioinformatics, 18, 1227-1235. 
Sugeno, M. & Kang, G.T. (1988) Structure identification of fuzzy model, Fuzzy Sets and 

Systems, 28, 15-33. 
Swan, G.W. (1981) Optimal-Control Applications in Biomedical-Engineering - a Survey, 

Optimal Control Applications & Methods, 2, 311-334. 




Takagi, T. & Sugeno, M. (1985) Fuzzy Identification of Systems and Its Applications to 

Modeling and Control, IEEE Transactions on Systems Man and Cybernetics, 15, 116- 

132. 
Tanaka, K., Ikeda, T. & Wang, H.O. (1998) Fuzzy regulators and fuzzy observers: Relaxed 

stability conditions and LMI-based designs, IEEE Transactions on Fuzzy Systems, 6, 

250-265. 
Tseng, C.S. (2008) A novel approach to H-infinity decentralized fuzzy-observer-based fuzzy 

control design for nonlinear interconnected systems, IEEE Transactions on Fuzzy 

Systems, 16, 1337-1350. 
van Rossum, J.M., Steyger, O., van Uem, T., Binkhorst, G.J. & Maes, R.A.A. (1986) 

Pharmacokinetics by using mathematical systems dynamics. In Eisenfeld, J. & 

Witten, M. (eds), Modelling of biomedical systems. Elsevier, Amsterdam, 121-126. 
Wang, H.O., Tanaka, K. & Griffin, M.F. (1996) An approach to fuzzy control of nonlinear 

systems: Stability and design issues, IEEE Transactions on Fuzzy Systems, 4, 14-23. 
Wein, L.M., D'Amato, R.M. & Perelson, A.S. (1998) Mathematical analysis of antiretroviral 

therapy aimed at HIV-1 eradication or maintenance of low viral loads, /. Theor. 

Biol, 192, 81-98. 
Wiener, N. (1948) Cybernetics; or, Control and communication in the animal and the machine. 

Technology Press, Cambridge. 
Wodarz, D. & Nowak, M.A. (1999) Specific therapy regimes could lead to long-term 

immunological control of HIV, Proc. Natl. Acad. Sci. USA, 96, 14464-14469. 
Wodarz, D. & Nowak, M.A. (2000) CD8 memory, immunodominance, and antigenic escape, 

Eur. ]. Immunol, 30, 2704-2712. 
Zhou, K., Doyle, J.C. & Glover, K. (1996) Robust and optimal control. Prentice Hall, Upper 

Saddle River, N.J. 



Robust H∞ Reliable Control of
Uncertain Switched Nonlinear
Systems with Time-varying Delay

Ronghao Wang¹, Jianchun Xing¹,
Ping Wang¹, Qiliang Yang¹ and Zhengrong Xiang²

¹PLA University of Science and Technology
²Nanjing University of Science and Technology
China



1. Introduction 

Switched systems are a class of hybrid systems consisting of a family of subsystems and a
switching law that defines which subsystem is activated during each interval of time. Many
real-world processes and systems can be modeled as switched systems, such as automobile
direction-reverse systems, computer disk systems, multiple-operating-point control systems of
airplanes and so on. Therefore, switched systems have a wide engineering background and can be
applied in many domains (Wang, W. & Brockett, R. W., 1997; Tomlin, C. et al., 1998; Varaiya,
P., 1993). Besides switching properties, when modeling an engineering system, system
uncertainties, which arise from the use of approximate system models for simplicity, data
errors in evaluation, changes in environmental conditions, etc., also exist naturally in
control systems. Therefore, both switching and uncertainties should be integrated into the
system model. Recently, the study of switched systems has mainly focused on stability and
stabilization (Sun, Z. D. & Ge, S. S., 2005; Song, Y. et al., 2008; Zhang, Y. et al., 2007).
Based on linear matrix inequality technology, the problem of robust control for such systems is
investigated in the literature (Pettersson, S. & Lennartson, B., 2002). In order to guarantee
H∞ performance of the system, robust H∞ control is studied using the linear matrix inequality
method in the literature (Sun, W. A. & Zhao, J., 2005).

In many engineering systems, the actuators may be subjected to faults in special 
environment due to the decline in the component quality or the breakage of working 
condition which always leads to undesirable performance, even makes system out of 
control. Therefore, it is of interest to design a control system which can tolerate faults of 
actuators. In addition, many engineering systems always involve time delay phenomenon, 
for instance, long-distance transportation systems, hydraulic pressure systems, network 
control systems and so on. Time delay is frequently a source of instability of feedback 
systems. Owing to all of these, we shouldn't neglect the influence of time delay and 
probable actuators faults when designing a practical control system. Up to now, research 
activities of this field for switched system have been of great interest. Stability analysis of a 
class of linear switching systems with time delay is presented in the literature (Kim, S. et al., 
2006). Robust H∞ control for discrete switched systems with time-varying delay is discussed



118 Robust Control, Theory and Applications 

in the literature (Song, Z. Y. et al., 2007). Reliable guaranteed-cost control for a class of 
uncertain switched linear systems with time delay is investigated in the literature (Wang, R. 
et al., 2006). Considering that the nonlinear disturbance could not be avoided in several 
applications, robust reliable control for uncertain switched nonlinear systems with time 
delay is studied in the literature (Xiang, Z. R. & Wang, R. H., 2008). Furthermore, Xiang and 
Wang (Xiang, Z. R. & Wang, R. H., 2009) investigated robust Loo reliable control for uncertain 
nonlinear switched systems with time delay. 

In the above studies of robust reliable control for uncertain nonlinear switched time-delay
systems, the time delay is treated as a constant. In actual operation, however, the time delay
usually varies with time, and the system model cannot be described appropriately using a
constant time delay in this case. This paper therefore focuses on systems with time-varying
delay. In addition, H∞ performance is always an important index of a control system. Therefore,
in order to overcome the adverse effect of time-varying delay on switched systems and make
systems disturbance-attenuating and fault-tolerant, this paper addresses robust H∞ reliable
control for nonlinear switched time-varying delay systems subject to uncertainties. The
multiple Lyapunov-Krasovskii functional method is used to design the control law. Compared with
the multiple Lyapunov function adopted in the literature (Xiang, Z. R. & Wang, R. H., 2008;
Xiang, Z. R. & Wang, R. H., 2009), the multiple Lyapunov-Krasovskii functional method is less
conservative because more system state information is contained in the functional. Moreover,
the controller parameters can be easily obtained using the constructed functional.

The organization of this paper is as follows. First, the concepts of robust reliable
controller, γ-suboptimal robust H∞ reliable controller and γ-optimal robust H∞ reliable
controller are presented. Second, the fault model of the actuators is put forward. The multiple
Lyapunov-Krasovskii functional method and the linear matrix inequality technique are adopted to
design the robust H∞ reliable controller; meanwhile, the corresponding switching law is
proposed to guarantee the stability of the system. By using the key technical lemma, the design
problems of the γ-suboptimal and γ-optimal robust H∞ reliable controllers can be transformed
into the problem of solving a set of matrix inequalities. It is worth pointing out that the
matrix inequalities in the γ-optimal problem are not linear; we therefore make use of a
variable substitution method to acquire the controller gain matrices, and the γ-optimal problem
can be transferred to solving for the minimal upper bound of the scalar γ. Furthermore, an
iterative procedure for solving the optimal disturbance attenuation performance γ is presented.
Finally, a numerical example shows the effectiveness of the proposed method. The result
illustrates that the designed controller can stabilize the original system and endow it with H∞
disturbance attenuation performance when the system has uncertain parameters and actuator
faults.

Notation Throughout this paper, Aᵀ denotes the transpose of matrix A; L₂[0,∞) denotes the space
of square-integrable functions on [0,∞); ‖x(t)‖ denotes the Euclidean norm; I is an identity
matrix of appropriate dimension; diag{a_i} denotes the diagonal matrix with diagonal elements
a_i, i = 1,2,...,q; S < 0 (or S > 0) denotes that S is a negative (or positive) definite
symmetric matrix; the set of positive integers is represented by Z⁺; A ≤ B (or A ≥ B) denotes
that A − B is a negative (or positive) semi-definite symmetric matrix; and * in a block matrix
represents the symmetric term, i.e.

$$\begin{bmatrix} A & B\\ * & C\end{bmatrix} = \begin{bmatrix} A & B\\ B^{T} & C\end{bmatrix}.$$




2. Problem formulation and preliminaries

Consider the following uncertain switched nonlinear system with time-varying delay
$$\dot{x}(t) = A_{\sigma(t)}x(t) + A_{d\sigma(t)}x(t-d(t)) + B_{\sigma(t)}u_f(t) + D_{\sigma(t)}w(t) + f_{\sigma(t)}(x(t),t) \quad (1)$$
$$z(t) = C_{\sigma(t)}x(t) + G_{\sigma(t)}u_f(t) + N_{\sigma(t)}w(t) \quad (2)$$
$$x(t) = \phi(t),\ t\in[-\rho,0] \quad (3)$$
where $x(t)\in R^n$ is the state vector, $w(t)\in R^q$ is the measurement noise, which belongs to
$L_2[0,\infty)$, $z(t)\in R^v$ is the output to be regulated, and $u_f(t)\in R^l$ is the control input of the
faulty actuator. The function $\sigma(t):[0,\infty)\to \underline{N}=\{1,2,\cdots,N\}$ is the switching signal, which is
deterministic, piecewise constant and right continuous, i.e. $\sigma(t):\{(0,\sigma(0)),(t_1,\sigma(t_1)),\cdots,(t_k,\sigma(t_k)),\cdots\}$,
$k\in Z^+$, where $t_k$ denotes the $k$th switching instant. Moreover, $\sigma(t)=i$ means that the $i$th
subsystem is activated, and $N$ is the number of subsystems. $\phi(t)$ is a continuous vector-valued
initial function. The function $d(t)$ denotes the time-varying state delay satisfying
$0\le d(t)\le \rho<\infty$, $\dot d(t)\le \mu<1$ for constants $\rho$, $\mu$, and $f_i(\cdot,\cdot): R^n\times R\to R^n$ for $i\in\underline{N}$ are
unknown nonlinear functions satisfying
$$\|f_i(x(t),t)\| \le \|U_i x(t)\| \quad (4)$$
where $U_i$ are known real constant matrices.

The matrices $A_i$, $A_{di}$ and $B_i$ for $i\in\underline{N}$ are uncertain real-valued matrices of appropriate
dimensions and are assumed to have the form
$$[A_i,\ A_{di},\ B_i] = [A_i,\ A_{di},\ B_i] + H_iF_i(t)[E_{1i},\ E_{di},\ E_{2i}] \quad (5)$$
(i.e. each uncertain matrix is the sum of its known nominal part, denoted by the same symbol
in the sequel, and a norm-bounded perturbation), where $A_i$, $A_{di}$, $B_i$, $H_i$, $E_{1i}$, $E_{di}$ and $E_{2i}$
for $i\in\underline{N}$ are known real constant matrices with proper dimensions, $H_i$, $E_{1i}$, $E_{di}$ and $E_{2i}$
describe the structure of the uncertainties, and $F_i(t)$ are unknown time-varying matrices that satisfy
$$F_i^T(t)F_i(t)\le I \quad (6)$$
The parameter uncertainty structure in equation (5) has been widely used and can represent
parameter uncertainty in many physical cases (Xiang, Z. R. & Wang, R. H., 2009; Cao, Y. et
al., 1998).

In an actual control system, faults inevitably occur during the operation of the actuators, so
the control signal applied by the actuator may be abnormal. We use $u(t)$ and $u_f(t)$ to
represent the normal control input and the abnormal (faulty) control input, respectively. The
control input of the faulty actuator can then be described as
$$u_f(t) = M_iu(t) \quad (7)$$
where $M_i$ is the actuator fault matrix of the form
$$M_i = \mathrm{diag}\{m_{i1}, m_{i2}, \cdots, m_{il}\},\qquad 0\le \underline m_{ik}\le m_{ik}\le \bar m_{ik},\ \bar m_{ik}\ge 1,\ k=1,2,\cdots,l \quad (8)$$




For simplicity, we introduce the following notation
$$M_{i0} = \mathrm{diag}\{m_{i1}^0, m_{i2}^0, \cdots, m_{il}^0\} \quad (9)$$
$$J_i = \mathrm{diag}\{j_{i1}, j_{i2}, \cdots, j_{il}\} \quad (10)$$
$$L_i = \mathrm{diag}\{l_{i1}, l_{i2}, \cdots, l_{il}\} \quad (11)$$
where
$$m_{ik}^0 = \tfrac{1}{2}(\underline m_{ik} + \bar m_{ik}),\qquad
j_{ik} = \frac{\bar m_{ik} - \underline m_{ik}}{\bar m_{ik} + \underline m_{ik}},\qquad
l_{ik} = \frac{m_{ik} - m_{ik}^0}{m_{ik}^0}.$$
By equations (9)-(11), we have
$$M_i = M_{i0}(I + L_i),\qquad |L_i| \le J_i < I \quad (12)$$
where $|L_i|$ represents the matrix formed by the absolute values of the diagonal elements of
$L_i$, i.e. $|L_i| = \mathrm{diag}\{|l_{i1}|, |l_{i2}|, \cdots, |l_{il}|\}$.

Remark 1 $m_{ik} = 1$ means normal operation of the $k$th actuator control signal of the $i$th
subsystem. When $m_{ik} = 0$, it covers the case of complete failure of the $k$th actuator control
signal of the $i$th subsystem. When $m_{ik} > 0$ and $m_{ik} \ne 1$, it corresponds to the case of partial
fault of the $k$th actuator control signal of the $i$th subsystem.

Now we give the definitions of robust H∞ reliable controllers for the uncertain switched
nonlinear system with time-varying delay.

Definition 1 Consider system (1) with $w(t)=0$. If there exists a state feedback controller
$u(t) = K_{\sigma(t)}x(t)$ such that the closed-loop system is asymptotically stable for all admissible
parameter uncertainties and actuator faults under the switching law $\sigma(t)$, then
$u(t) = K_{\sigma(t)}x(t)$ is said to be a robust reliable controller.

Definition 2 Consider system (1)-(3) and let $\gamma>0$ be a positive constant. If there exist a state
feedback controller $u(t) = K_{\sigma(t)}x(t)$ and a switching law $\sigma(t)$ such that
i. with $w(t)=0$, the closed-loop system is asymptotically stable;
ii. under zero initial conditions, i.e. $x(t)=0$ for $t\in[-\rho,0]$, the following inequality holds
$$\|z(t)\|_2 < \gamma\|w(t)\|_2,\qquad \forall\, 0\ne w(t)\in L_2[0,\infty), \quad (13)$$
then $u(t) = K_{\sigma(t)}x(t)$ is said to be a γ-suboptimal robust H∞ reliable controller with
disturbance attenuation performance γ. If γ attains its minimal value, $u(t) = K_{\sigma(t)}x(t)$ is said
to be a γ-optimal robust H∞ reliable controller.

The following lemmas will be used to design the robust H∞ reliable controller for the
uncertain switched nonlinear system with time-varying delay.

Lemma 1 (Boyd, S. P. et al., 1994; Schur complement) For a given matrix
$$S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}$$
with $S_{11} = S_{11}^T$, $S_{22} = S_{22}^T$, $S_{21} = S_{12}^T$, the following conditions are equivalent:
i. $S < 0$;
ii. $S_{11} < 0$, $S_{22} - S_{21}S_{11}^{-1}S_{12} < 0$;
iii. $S_{22} < 0$, $S_{11} - S_{12}S_{22}^{-1}S_{21} < 0$.




Lemma 2 (Cong, S. et al., 2007) For matrices $X$ and $Y$ of appropriate dimensions and any
$Q>0$, we have
$$X^TY + Y^TX \le X^TQX + Y^TQ^{-1}Y.$$

Lemma 3 (Lien, C. H., 2007) Let $Y$, $D$, $E$ and $F$ be real matrices of appropriate dimensions
with $Y$ satisfying $Y^T = Y$. Then for all $F^TF \le I$
$$Y + DFE + E^TF^TD^T < 0$$
if and only if there exists a scalar $\varepsilon > 0$ such that
$$Y + \varepsilon DD^T + \varepsilon^{-1}E^TE < 0.$$

Lemma 4 (Xiang, Z. R. & Wang, R. H., 2008) For matrices $R_1$, $R_2$, the following inequality
holds
$$R_1\Sigma(t)R_2 + R_2^T\Sigma^T(t)R_1^T \le \beta R_1UR_1^T + \beta^{-1}R_2^TUR_2$$
where $\beta>0$, $\Sigma(t)$ is a time-varying diagonal matrix, $U$ is a known real constant matrix
satisfying $|\Sigma(t)| \le U$, and $|\Sigma(t)|$ represents the matrix of absolute values of the diagonal
elements of $\Sigma(t)$.

Lemma 5 (Peleties, P. & DeCarlo, R. A., 1991) Consider the following system
$$\dot x(t) = f_{\sigma(t)}(x(t)) \quad (14)$$
where $\sigma(t):[0,\infty)\to\underline N = \{1,2,\cdots,N\}$. If there exists a set of functions $V_i: R^n \to R$, $i\in\underline N$, such that
(i) $V_i$ is a positive definite, decrescent and radially unbounded function;
(ii) $\dot V_i(x(t)) = (\partial V_i/\partial x)f_i(x) < 0$ along the solution of (14) whenever the $i$th subsystem is active;
(iii) $V_j(x(t_k)) \le V_i(x(t_k))$ when the $i$th subsystem is switched to the $j$th subsystem, $i,j\in\underline N$,
$i\ne j$, at the switching instant $t_k$, $k\in Z^+$,
then system (14) is asymptotically stable.

3. Main results

3.1 Condition of stability

Consider the following unperturbed switched nonlinear system with time-varying delay
$$\dot x(t) = A_{\sigma(t)}x(t) + A_{d\sigma(t)}x(t-d(t)) + f_{\sigma(t)}(x(t),t) \quad (15)$$
$$x(t) = \phi(t),\ t\in[-\rho,0] \quad (16)$$
The following theorem presents a sufficient condition of stability for system (15)-(16).

Theorem 1 For system (15)-(16), if there exist symmetric positive definite matrices $P_i$, $Q$
and a positive scalar $\delta$ such that
$$P_i \le \delta I \quad (17)$$
$$\begin{bmatrix} A_i^TP_i + P_iA_i + P_j + Q + \delta U_i^TU_i & P_iA_{di} \\ * & -(1-\mu)Q \end{bmatrix} < 0 \quad (18)$$
where $i\ne j$, $i,j\in\underline N$, then system (15)-(16) is asymptotically stable under the switching law
$$\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)P_ix(t)\}.$$

Proof For the $i$th subsystem, we define the Lyapunov-Krasovskii functional
$$V_i(x(t)) = x^T(t)P_ix(t) + \int_{t-d(t)}^{t}x^T(\tau)Qx(\tau)d\tau$$
where $P_i$, $Q$ are symmetric positive definite matrices. Along the trajectories of system (15),
the time derivative of $V_i(x(t))$ is given by
$$\begin{aligned}
\dot V_i(x(t)) &= \dot x^T(t)P_ix(t) + x^T(t)P_i\dot x(t) + x^T(t)Qx(t) - (1-\dot d(t))x^T(t-d(t))Qx(t-d(t)) \\
&\le \dot x^T(t)P_ix(t) + x^T(t)P_i\dot x(t) + x^T(t)Qx(t) - (1-\mu)x^T(t-d(t))Qx(t-d(t)) \\
&= 2x^T(t)P_i[A_ix(t) + A_{di}x(t-d(t)) + f_i(x(t),t)] + x^T(t)Qx(t) - (1-\mu)x^T(t-d(t))Qx(t-d(t)) \\
&= x^T(t)(A_i^TP_i + P_iA_i + Q)x(t) + 2x^T(t)P_iA_{di}x(t-d(t)) + 2x^T(t)P_if_i(x(t),t) \\
&\quad - (1-\mu)x^T(t-d(t))Qx(t-d(t)).
\end{aligned}$$
From Lemma 2, it is established that
$$2x^T(t)P_if_i(x(t),t) \le x^T(t)P_ix(t) + f_i^T(x(t),t)P_if_i(x(t),t).$$
From expressions (4) and (17), it follows that
$$2x^T(t)P_if_i(x(t),t) \le x^T(t)P_ix(t) + \delta f_i^T(x(t),t)f_i(x(t),t) \le x^T(t)P_ix(t) + \delta x^T(t)U_i^TU_ix(t).$$
Therefore, we can obtain that
$$\dot V_i(x(t)) \le x^T(t)(A_i^TP_i + P_iA_i + Q + P_i + \delta U_i^TU_i)x(t) + 2x^T(t)P_iA_{di}x(t-d(t)) - (1-\mu)x^T(t-d(t))Qx(t-d(t)) = \eta^T\Theta_i\eta,$$
where
$$\eta = \begin{bmatrix} x(t) \\ x(t-d(t)) \end{bmatrix},\qquad
\Theta_i = \begin{bmatrix} A_i^TP_i + P_iA_i + Q + P_i + \delta U_i^TU_i & P_iA_{di} \\ * & -(1-\mu)Q \end{bmatrix}.$$
From (18), we have
$$\Theta_i + \mathrm{diag}\{P_j - P_i,\ 0\} < 0. \quad (19)$$
Using $\eta^T$ and $\eta$ to pre- and post-multiply the left-hand side of (19) yields
$$\dot V_i(x(t)) < x^T(t)(P_i - P_j)x(t). \quad (20)$$
The switching law $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)P_ix(t)\}$ implies that, for $i,j\in\underline N$, $i\ne j$, while the $i$th
subsystem is active,
$$x^T(t)P_ix(t) \le x^T(t)P_jx(t). \quad (21)$$
(20) and (21) lead to
$$\dot V_i(x(t)) < 0. \quad (22)$$
Obviously, the switching law $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)P_ix(t)\}$ also guarantees that the
Lyapunov-Krasovskii functional value of the activated subsystem is minimal at each
switching instant, so condition (iii) of Lemma 5 is satisfied. From Lemma 5, we obtain that
system (15)-(16) is asymptotically stable. The proof is completed. ■

Remark 2 It is worth pointing out that condition (21) does not imply $P_i \le P_j$, since $x(t)$
does not represent all the states in the domain $R^n$ but only the state of the $i$th activated
subsystem.
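
A minimal sketch (in Python, with our own hypothetical names) of how the min-type
switching law of Theorem 1 can be evaluated online: at each instant the active subsystem is
the one whose Lyapunov matrix yields the smallest quadratic value.

```python
import numpy as np

def switching_signal(x, P_list):
    """State-dependent min-type switching law of Theorem 1:
    sigma(t) = argmin_i x(t)^T P_i x(t).
    P_list is the list of Lyapunov matrices P_i; a 1-based index is returned."""
    costs = [float(x @ P @ x) for P in P_list]
    return int(np.argmin(costs)) + 1
```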



3.2 Design of robust reliable controller

Consider system (1) with $w(t) = 0$:
$$\dot x(t) = A_{\sigma(t)}x(t) + A_{d\sigma(t)}x(t-d(t)) + B_{\sigma(t)}u_f(t) + f_{\sigma(t)}(x(t),t) \quad (23)$$
$$x(t) = \phi(t),\ t\in[-\rho,0] \quad (24)$$
By (7), for the $i$th subsystem the feedback control law can be designed as
$$u_f(t) = M_iK_ix(t) \quad (25)$$
Substituting (25) into (23), the corresponding closed-loop system can be written as
$$\dot x(t) = \hat A_ix(t) + A_{di}x(t-d(t)) + f_i(x(t),t) \quad (26)$$
where $\hat A_i = A_i + B_iM_iK_i$, $i\in\underline N$.

The following theorem presents a sufficient condition for the existence of the robust reliable
controller for system (23)-(24).

Theorem 2 For system (23)-(24), if there exist symmetric positive definite matrices $X_i$, $S$,
matrices $Y_i$ and a positive scalar $\lambda$ such that
$$X_i \ge \lambda I \quad (27)$$
$$\begin{bmatrix}
\Psi_i & A_{di}S & H_i & \Phi_i^T & X_i & X_i & X_iU_i^T \\
* & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 \\
* & * & -I & 0 & 0 & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -S & 0 & 0 \\
* & * & * & * & * & -X_j & 0 \\
* & * & * & * & * & * & -\lambda I
\end{bmatrix} < 0 \quad (28)$$
where $i\ne j$, $i,j\in\underline N$, $\Psi_i = A_iX_i + B_iM_iY_i + (A_iX_i + B_iM_iY_i)^T$ and
$\Phi_i = E_{1i}X_i + E_{2i}M_iY_i$, then there exists the robust reliable state feedback controller
$$u(t) = K_{\sigma(t)}x(t),\qquad K_i = Y_iX_i^{-1} \quad (29)$$
and, with the switching law designed as $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}$, the closed-loop system is
asymptotically stable.
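
The feasibility problem (27)-(28) is a set of LMIs in $X_i$, $Y_i$, $S$ and $\lambda$, so it can be handled by
any semidefinite programming tool. The chapter itself uses the MATLAB LMI toolbox; the
following sketch poses the same problem with Python/cvxpy under simplifying assumptions
(square $U_i\in R^{n\times n}$, a fixed fault matrix $M_i$ per subsystem); all function and variable names
are ours, not the authors'.

```python
import numpy as np
import cvxpy as cp

def reliable_controller_lmis(A, Ad, B, H, E1, Ed, E2, U, M, mu, N):
    """Feasibility sketch of (27)-(28) in Theorem 2. All arguments are lists
    of subsystem matrices; M[i] is a fixed fault matrix, mu the delay-rate
    bound. Returns the gains K_i = Y_i X_i^{-1} and the X_i matrices."""
    n = A[0].shape[0]
    m = B[0].shape[1]
    p = H[0].shape[1]           # width of H_i
    q = E1[0].shape[0]          # rows of E_1i, E_di, E_2i
    X = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    Y = [cp.Variable((m, n)) for _ in range(N)]
    S = cp.Variable((n, n), symmetric=True)
    lam = cp.Variable(nonneg=True)
    Z = lambda r, c: np.zeros((r, c))
    cons = [S >> 1e-6 * np.eye(n)]
    cons += [X[i] >> lam * np.eye(n) for i in range(N)]            # (27)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            W = A[i] @ X[i] + B[i] @ M[i] @ Y[i]
            Psi = W + W.T
            Phi = E1[i] @ X[i] + E2[i] @ M[i] @ Y[i]
            # symmetric 7x7 block matrix of (28), written out explicitly
            lmi = cp.bmat([
              [Psi,            Ad[i] @ S,       H[i],        Phi.T,          X[i],    X[i],    X[i] @ U[i].T],
              [(Ad[i] @ S).T,  -(1 - mu) * S,   Z(n, p),     S @ Ed[i].T,    Z(n, n), Z(n, n), Z(n, n)],
              [H[i].T,         Z(p, n),         -np.eye(p),  Z(p, q),        Z(p, n), Z(p, n), Z(p, n)],
              [Phi,            Ed[i] @ S,       Z(q, p),     -np.eye(q),     Z(q, n), Z(q, n), Z(q, n)],
              [X[i],           Z(n, n),         Z(n, p),     Z(n, q),        -S,      Z(n, n), Z(n, n)],
              [X[i],           Z(n, n),         Z(n, p),     Z(n, q),        Z(n, n), -X[j],   Z(n, n)],
              [U[i] @ X[i],    Z(n, n),         Z(n, p),     Z(n, q),        Z(n, n), Z(n, n), -lam * np.eye(n)]])
            cons.append(lmi << 0)                                   # (28)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    K = [np.linalg.solve(X[i].value.T, Y[i].value.T).T for i in range(N)]  # Y_i X_i^{-1}
    return K, [Xi.value for Xi in X]
```

Once the $X_i$ are returned, the switching law of Theorem 2 is evaluated exactly as in the
sketch given after Remark 2, with $P_i$ replaced by $X_i^{-1}$.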

Proof From (5) and Theorem 1, a sufficient condition for the asymptotic stability of system
(26) is
$$P_i \le \delta I \quad (30)$$
$$\begin{bmatrix} \Xi_{ij} & P_i(A_{di} + H_iF_i(t)E_{di}) \\ * & -(1-\mu)Q \end{bmatrix} < 0 \quad (31)$$
with the switching law designed as $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)P_ix(t)\}$, where
$$\Xi_{ij} = P_i[A_i + B_iM_iK_i + H_iF_i(t)(E_{1i}+E_{2i}M_iK_i)] + [A_i + B_iM_iK_i + H_iF_i(t)(E_{1i}+E_{2i}M_iK_i)]^TP_i + P_j + Q + \delta U_i^TU_i.$$
Denote
$$\Gamma_{ij} = \begin{bmatrix} P_i(A_i+B_iM_iK_i) + (A_i+B_iM_iK_i)^TP_i + P_j + Q + \delta U_i^TU_i & P_iA_{di} \\ * & -(1-\mu)Q \end{bmatrix}. \quad (32)$$
Then (31) can be written as
$$\Gamma_{ij} + \begin{bmatrix} P_iH_i \\ 0 \end{bmatrix}F_i(t)\begin{bmatrix} E_{1i}+E_{2i}M_iK_i & E_{di} \end{bmatrix}
+ \begin{bmatrix} E_{1i}+E_{2i}M_iK_i & E_{di} \end{bmatrix}^TF_i^T(t)\begin{bmatrix} P_iH_i \\ 0 \end{bmatrix}^T < 0. \quad (33)$$
By Lemma 3, if there exists a scalar $\varepsilon>0$ such that
$$\Gamma_{ij} + \varepsilon\begin{bmatrix} P_iH_i \\ 0 \end{bmatrix}\begin{bmatrix} P_iH_i \\ 0 \end{bmatrix}^T
+ \varepsilon^{-1}\begin{bmatrix} E_{1i}+E_{2i}M_iK_i & E_{di} \end{bmatrix}^T\begin{bmatrix} E_{1i}+E_{2i}M_iK_i & E_{di} \end{bmatrix} < 0, \quad (34)$$
then (31) holds. (34) can also be written as
$$\begin{bmatrix} \Pi_{ij} & P_iA_{di} + \varepsilon^{-1}(E_{1i}+E_{2i}M_iK_i)^TE_{di} \\ * & -(1-\mu)Q + \varepsilon^{-1}E_{di}^TE_{di} \end{bmatrix} < 0 \quad (35)$$
where
$$\Pi_{ij} = (A_i+B_iM_iK_i)^TP_i + P_i(A_i+B_iM_iK_i) + \varepsilon P_iH_iH_i^TP_i + \varepsilon^{-1}(E_{1i}+E_{2i}M_iK_i)^T(E_{1i}+E_{2i}M_iK_i) + P_j + Q + \delta U_i^TU_i.$$
Using $\mathrm{diag}\{\varepsilon^{1/2}I,\ \varepsilon^{1/2}I\}$ to pre- and post-multiply the left-hand side of (35) and denoting
$\bar P_i = \varepsilon P_i$, $\bar Q = \varepsilon Q$, we have
$$\begin{bmatrix} \bar\Pi_{ij} & \bar P_iA_{di} + (E_{1i}+E_{2i}M_iK_i)^TE_{di} \\ * & -(1-\mu)\bar Q + E_{di}^TE_{di} \end{bmatrix} < 0 \quad (36)$$
where
$$\bar\Pi_{ij} = (A_i+B_iM_iK_i)^T\bar P_i + \bar P_i(A_i+B_iM_iK_i) + \bar P_iH_iH_i^T\bar P_i + (E_{1i}+E_{2i}M_iK_i)^T(E_{1i}+E_{2i}M_iK_i) + \bar P_j + \bar Q + \varepsilon\delta U_i^TU_i.$$
By Lemma 1, (36) is equivalent to
$$\begin{bmatrix}
\bar\Pi'_{ij} & \bar P_iA_{di} & \bar P_iH_i & (E_{1i}+E_{2i}M_iK_i)^T \\
* & -(1-\mu)\bar Q & 0 & E_{di}^T \\
* & * & -I & 0 \\
* & * & * & -I
\end{bmatrix} < 0 \quad (37)$$
where $\bar\Pi'_{ij} = (A_i+B_iM_iK_i)^T\bar P_i + \bar P_i(A_i+B_iM_iK_i) + \bar P_j + \bar Q + \varepsilon\delta U_i^TU_i$.
Using $\mathrm{diag}\{\bar P_i^{-1},\ \bar Q^{-1},\ I,\ I\}$ to pre- and post-multiply the left-hand side of (37) and
denoting $X_i = \bar P_i^{-1}$, $Y_i = K_iX_i$, $S = \bar Q^{-1}$, $\lambda = (\varepsilon\delta)^{-1}$, (37) can be written as
$$\begin{bmatrix}
\Pi''_{ij} & A_{di}S & H_i & (E_{1i}X_i + E_{2i}M_iY_i)^T \\
* & -(1-\mu)S & 0 & SE_{di}^T \\
* & * & -I & 0 \\
* & * & * & -I
\end{bmatrix} < 0 \quad (38)$$
where $\Pi''_{ij} = (A_iX_i + B_iM_iY_i)^T + (A_iX_i + B_iM_iY_i) + X_i\big(X_j^{-1} + S^{-1} + \lambda^{-1}U_i^TU_i\big)X_i$.
Using Lemma 1 again, (38) is equivalent to (28). Meanwhile, substituting $X_i = \bar P_i^{-1}$,
$\bar P_i = \varepsilon P_i$ and $\lambda = (\varepsilon\delta)^{-1}$ into (30) yields (27). The switching law then becomes
$$\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}. \quad (39)$$
Based on the above arguments, we know that if (27) and (28) hold and the switching law is
designed as (39), the state feedback controller $u(t) = K_{\sigma(t)}x(t)$, $K_i = Y_iX_i^{-1}$, guarantees that
system (23)-(24) is asymptotically stable. The proof is completed. ■



3.3 Design of robust H∞ reliable controller

Consider system (1)-(3). By (7), for the $i$th subsystem the feedback control law can be
designed as
$$u_f(t) = M_iK_ix(t) \quad (40)$$
Substituting (40) into (1) and (2), the corresponding closed-loop system can be written as
$$\dot x(t) = \hat A_ix(t) + A_{di}x(t-d(t)) + D_iw(t) + f_i(x(t),t) \quad (41)$$
$$z(t) = \hat C_ix(t) + N_iw(t) \quad (42)$$
where $\hat A_i = A_i + B_iM_iK_i$, $\hat C_i = C_i + G_iM_iK_i$, $i\in\underline N$.

The following theorem presents a sufficient condition for the existence of the robust H∞
reliable controller for system (1)-(3).

Theorem 3 For system (1)-(3), if there exist symmetric positive definite matrices $X_i$, $S$,
matrices $Y_i$ and positive scalars $\lambda$, $\varepsilon$ such that
$$X_i \ge \lambda I \quad (43)$$
$$\begin{bmatrix}
\Psi_i & D_i & (C_iX_i+G_iM_iY_i)^T & A_{di}S & H_i & \Phi_i^T & X_i & X_i & X_iU_i^T \\
* & -\gamma\varepsilon I & N_i^T & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & -\gamma\varepsilon^{-1}I & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 \\
* & * & * & * & -I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & * & * & -S & 0 & 0 \\
* & * & * & * & * & * & * & -X_j & 0 \\
* & * & * & * & * & * & * & * & -\lambda I
\end{bmatrix} < 0 \quad (44)$$
where $i\ne j$, $i,j\in\underline N$, $\Psi_i = A_iX_i+B_iM_iY_i+(A_iX_i+B_iM_iY_i)^T$ and
$\Phi_i = E_{1i}X_i+E_{2i}M_iY_i$, then there exists the robust H∞ reliable state feedback controller
$$u(t) = K_{\sigma(t)}x(t),\qquad K_i = Y_iX_i^{-1} \quad (45)$$
and, with the switching law designed as $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}$, the closed-loop system is
asymptotically stable with disturbance attenuation performance γ for all admissible
uncertainties as well as all actuator faults.
Proof By (44) and Lemma 1, we can obtain that
$$\begin{bmatrix}
\Psi_i & A_{di}S & H_i & \Phi_i^T & X_i & X_i & X_iU_i^T \\
* & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 \\
* & * & -I & 0 & 0 & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -S & 0 & 0 \\
* & * & * & * & * & -X_j & 0 \\
* & * & * & * & * & * & -\lambda I
\end{bmatrix} < 0 \quad (46)$$
From Theorem 2, we know that the closed-loop system (41) with $w(t)=0$ is asymptotically
stable.

Define the following piecewise Lyapunov-Krasovskii functional candidate
$$V(x(t)) = V_i(x(t)) = x^T(t)P_ix(t) + \int_{t-d(t)}^{t}x^T(\tau)Qx(\tau)d\tau,\qquad t\in[t_n,t_{n+1}),\ n=0,1,\cdots \quad (47)$$
where $P_i$, $Q$ are symmetric positive definite matrices and $t_0=0$. Along the trajectories of
system (41), the time derivative of $V_i(x(t))$ satisfies
$$\dot V_i(x(t)) \le \zeta^T\begin{bmatrix}
\Xi_i & P_iD_i & P_i(A_{di}+H_iF_i(t)E_{di}) \\
* & 0 & 0 \\
* & * & -(1-\mu)Q
\end{bmatrix}\zeta \quad (48)$$
where
$$\zeta = \begin{bmatrix} x^T(t) & w^T(t) & x^T(t-d(t))\end{bmatrix}^T,$$
$$\Xi_i = P_i[A_i+B_iM_iK_i+H_iF_i(t)(E_{1i}+E_{2i}M_iK_i)] + [A_i+B_iM_iK_i+H_iF_i(t)(E_{1i}+E_{2i}M_iK_i)]^TP_i + P_i + Q + \delta U_i^TU_i.$$
By simple computation, we obtain
$$\gamma^{-1}z^T(t)z(t) - \gamma w^T(t)w(t) = \zeta^T\begin{bmatrix}
\gamma^{-1}(C_i+G_iM_iK_i)^T(C_i+G_iM_iK_i) & \gamma^{-1}(C_i+G_iM_iK_i)^TN_i & 0 \\
* & \gamma^{-1}N_i^TN_i - \gamma I & 0 \\
* & * & 0
\end{bmatrix}\zeta \quad (49)$$
Denote $X_i = \bar P_i^{-1}$, $Y_i = K_iX_i$, $S = \bar Q^{-1}$, $\bar P_i = \varepsilon P_i$, $\bar Q = \varepsilon Q$. Substituting them into (44) and
using Lemma 1 and Lemma 3, through equivalent transformations we have
$$\begin{bmatrix}
\Xi_{ij} + \gamma^{-1}(C_i+G_iM_iK_i)^T(C_i+G_iM_iK_i) & \gamma^{-1}(C_i+G_iM_iK_i)^TN_i + P_iD_i & P_i(A_{di}+H_iF_i(t)E_{di}) \\
* & \gamma^{-1}N_i^TN_i - \gamma I & 0 \\
* & * & -(1-\mu)Q
\end{bmatrix} < 0 \quad (50)$$
where $\Xi_{ij}$ is obtained from $\Xi_i$ by replacing the term $P_i$ with $P_j$. Obviously, under the
switching law $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)P_ix(t)\}$ we have $x^T(t)P_ix(t) \le x^T(t)P_jx(t)$, and combining
(48)-(50) yields
$$\gamma^{-1}z^T(t)z(t) - \gamma w^T(t)w(t) + \dot V_i(x(t)) < 0. \quad (51)$$
Define
$$J = \int_0^{\infty}\big(\gamma^{-1}z^T(t)z(t) - \gamma w^T(t)w(t)\big)dt. \quad (52)$$
Consider the switching sequence $\sigma(t):\{(0,\sigma(0)),(t_1,\sigma(t_1)),\cdots,(t_k,\sigma(t_k)),\cdots\}$, which means
the $\sigma(t_k)$th subsystem is activated at $t_k$. Combining (47), (51) and (52), for zero initial
conditions we have
$$J \le \int_0^{t_1}\big(\gamma^{-1}z^Tz - \gamma w^Tw + \dot V_{\sigma(0)}\big)dt + \int_{t_1}^{t_2}\big(\gamma^{-1}z^Tz - \gamma w^Tw + \dot V_{\sigma(t_1)}\big)dt + \cdots < 0.$$
Therefore, we can obtain $\|z(t)\|_2 < \gamma\|w(t)\|_2$. The proof is completed. ■

When the actuator fault is taken into account in the controller design, we have the following
theorem.

Theorem 4 For system (1)-(3), let γ be a given positive scalar. If there exist symmetric
positive definite matrices $X_i$, $S$, matrices $Y_i$ and positive scalars $\alpha$, $\varepsilon$, $\lambda$ such that
$$X_i \ge \lambda I \quad (53)$$
$$\begin{bmatrix}
\Sigma_i & D_i & \Sigma_{1i}^T & A_{di}S & H_i & \Sigma_{2i}^T & X_i & X_i & X_iU_i^T & Y_i^TM_{i0}J_i^{1/2} \\
* & -\gamma\varepsilon I & N_i^T & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Sigma_{3i} & 0 & 0 & \alpha G_iJ_iE_{2i}^T & 0 & 0 & 0 & 0 \\
* & * & * & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 & 0 \\
* & * & * & * & -I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \Sigma_{4i} & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & -S & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -X_j & 0 & 0 \\
* & * & * & * & * & * & * & * & -\lambda I & 0 \\
* & * & * & * & * & * & * & * & * & -\alpha I
\end{bmatrix} < 0 \quad (54)$$
where $i\ne j$, $i,j\in\underline N$, and
$$\Sigma_i = A_iX_i + B_iM_{i0}Y_i + (A_iX_i + B_iM_{i0}Y_i)^T + \alpha B_iJ_iB_i^T,$$
$$\Sigma_{1i} = C_iX_i + G_iM_{i0}Y_i + \alpha G_iJ_iB_i^T,\qquad \Sigma_{2i} = E_{1i}X_i + E_{2i}M_{i0}Y_i + \alpha E_{2i}J_iB_i^T,$$
$$\Sigma_{3i} = -\gamma\varepsilon^{-1}I + \alpha G_iJ_iG_i^T,\qquad \Sigma_{4i} = -I + \alpha E_{2i}J_iE_{2i}^T,$$
then there exists the γ-suboptimal robust H∞ reliable controller
$$u(t) = K_{\sigma(t)}x(t),\qquad K_i = Y_iX_i^{-1} \quad (55)$$
and, with the switching law designed as $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}$, the closed-loop system is
asymptotically stable.

Proof By Theorem 3, substituting (12) into (44) yields
$$\Theta_{i0} + \Lambda_i < 0 \quad (56)$$
where $\Theta_{i0}$ denotes the left-hand side of (44) with $M_i$ replaced by $M_{i0}$ (i.e. with
$\Psi_{i0} = A_iX_i + B_iM_{i0}Y_i + (A_iX_i+B_iM_{i0}Y_i)^T$ and $\Phi_{i0} = E_{1i}X_i + E_{2i}M_{i0}Y_i$), and $\Lambda_i$
collects the terms induced by the fault uncertainty $L_i$: its block (1,1) is
$B_iM_{i0}L_iY_i + (B_iM_{i0}L_iY_i)^T$, its block (1,3) is $(G_iM_{i0}L_iY_i)^T$, its block (1,6) is
$(E_{2i}M_{i0}L_iY_i)^T$, and all its remaining blocks are zero.

Noticing that $M_{i0}$ and $L_i$ are both diagonal matrices (and therefore commute), $\Lambda_i$ can be
written in the outer-product form
$$\Lambda_i = \begin{bmatrix} B_i \\ 0 \\ G_i \\ 0 \\ 0 \\ E_{2i} \\ 0 \\ 0 \\ 0 \end{bmatrix}
L_i\begin{bmatrix} M_{i0}Y_i & 0 & \cdots & 0\end{bmatrix}
+ \begin{bmatrix} Y_i^TM_{i0} \\ 0 \\ \vdots \\ 0\end{bmatrix}
L_i\begin{bmatrix} B_i^T & 0 & G_i^T & 0 & 0 & E_{2i}^T & 0 & 0 & 0\end{bmatrix}. \quad (57)$$
From Lemma 4 and (12), we can obtain that
$$\Lambda_i \le \alpha\begin{bmatrix} B_i \\ 0 \\ G_i \\ 0 \\ 0 \\ E_{2i} \\ 0 \\ 0 \\ 0\end{bmatrix}
J_i\begin{bmatrix} B_i^T & 0 & G_i^T & 0 & 0 & E_{2i}^T & 0 & 0 & 0\end{bmatrix}
+ \alpha^{-1}\begin{bmatrix} Y_i^TM_{i0} \\ 0 \\ \vdots \\ 0\end{bmatrix}
J_i\begin{bmatrix} M_{i0}Y_i & 0 & \cdots & 0\end{bmatrix}. \quad (58)$$
Then the inequality
$$\Theta_{i0} + \alpha\begin{bmatrix} B_i \\ 0 \\ G_i \\ 0 \\ 0 \\ E_{2i} \\ 0 \\ 0 \\ 0\end{bmatrix}
J_i\begin{bmatrix} B_i^T & 0 & G_i^T & 0 & 0 & E_{2i}^T & 0 & 0 & 0\end{bmatrix}
+ \alpha^{-1}\begin{bmatrix} Y_i^TM_{i0} \\ 0 \\ \vdots \\ 0\end{bmatrix}
J_i\begin{bmatrix} M_{i0}Y_i & 0 & \cdots & 0\end{bmatrix} < 0 \quad (59)$$
can guarantee that (56) holds.

By Lemma 1, (59) is equivalent to (54). The proof is completed. ■

Remark 3 (54) is not linear, because it contains the unknown variables $\varepsilon$ and $\varepsilon^{-1}$. Therefore,
we utilize a variable substitution to solve the matrix inequality (54). Using
$\mathrm{diag}\{I,\ \varepsilon^{-1}I,\ I,\ I,\ I,\ I,\ I,\ I,\ I,\ I\}$ to pre- and post-multiply the left-hand side of (54)
and denoting $\eta = \varepsilon^{-1}$, (54) can be transformed into the following linear matrix inequality
$$\begin{bmatrix}
\Sigma_i & \eta D_i & \Sigma_{1i}^T & A_{di}S & H_i & \Sigma_{2i}^T & X_i & X_i & X_iU_i^T & Y_i^TM_{i0}J_i^{1/2} \\
* & -\eta\gamma I & \eta N_i^T & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Sigma_{3i} & 0 & 0 & \alpha G_iJ_iE_{2i}^T & 0 & 0 & 0 & 0 \\
* & * & * & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 & 0 \\
* & * & * & * & -I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \Sigma_{4i} & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & -S & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -X_j & 0 & 0 \\
* & * & * & * & * & * & * & * & -\lambda I & 0 \\
* & * & * & * & * & * & * & * & * & -\alpha I
\end{bmatrix} < 0 \quad (60)$$
where $\Sigma_{3i} = -\eta\gamma I + \alpha G_iJ_iG_i^T$.

Corollary 1 For system (1)-(3), if the optimization problem
$$\min_{X_i>0,\ S>0,\ \alpha>0,\ \varepsilon>0,\ \lambda>0,\ Y_i}\ \gamma \qquad \text{s.t. (53) and (54)} \quad (61)$$
has a feasible solution $X_i>0$, $S>0$, $\alpha>0$, $\varepsilon>0$, $\lambda>0$, $Y_i$, $i\in\underline N$, then there exists the
γ-optimal robust H∞ reliable controller
$$u(t) = K_{\sigma(t)}x(t),\qquad K_i = Y_iX_i^{-1} \quad (62)$$
and, with the switching law designed as $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}$, the closed-loop system is
asymptotically stable.

Remark 4 The products $\gamma\varepsilon$ and $\gamma\varepsilon^{-1}$ appear in (54), so it is difficult to solve the above
optimization problem directly. We denote $\theta = \gamma\varepsilon$ and $\chi = \gamma\varepsilon^{-1}$ and substitute them into
(54); then (54) becomes a set of linear matrix inequalities. Noticing that
$\gamma = \sqrt{\theta\chi} \le \tfrac{1}{2}(\theta+\chi)$, we can solve the following optimization problem to obtain a
minimal upper bound of γ:
$$\min_{X_i>0,\ S>0,\ \alpha>0,\ \theta>0,\ \chi>0,\ \lambda>0,\ Y_i}\ \frac{\theta+\chi}{2} \quad (63)$$
subject to
$$\begin{bmatrix}
\Sigma_i & D_i & \Sigma_{1i}^T & A_{di}S & H_i & \Sigma_{2i}^T & X_i & X_i & X_iU_i^T & Y_i^TM_{i0}J_i^{1/2} \\
* & -\theta I & N_i^T & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Sigma_{3i} & 0 & 0 & \alpha G_iJ_iE_{2i}^T & 0 & 0 & 0 & 0 \\
* & * & * & -(1-\mu)S & 0 & SE_{di}^T & 0 & 0 & 0 & 0 \\
* & * & * & * & -I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \Sigma_{4i} & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & -S & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -X_j & 0 & 0 \\
* & * & * & * & * & * & * & * & -\lambda I & 0 \\
* & * & * & * & * & * & * & * & * & -\alpha I
\end{bmatrix} < 0 \quad (64)$$
$$X_i \ge \lambda I \quad (65)$$
where $\Sigma_{3i} = -\chi I + \alpha G_iJ_iG_i^T$. The minimal value of γ can then be acquired by the
following steps.
Step 1. From (63)-(65), solve for the minimal value $\gamma^{(1)}$ of $\tfrac{1}{2}(\theta+\chi)$; $\gamma^{(1)}$ is the first
iterative value.
Step 2. Choose an appropriate step size $\delta_\gamma>0$ and let $\gamma^{(2)} = \gamma^{(1)} - \delta_\gamma$.
Step 3. Substitute $\gamma^{(2)}$ into (60) and solve the LMIs. If there is no feasible solution, stop
iterating and $\gamma^{(1)}$ is the optimal performance index; otherwise, continue decreasing γ by
$\delta_\gamma$ until a value $\gamma^{(k)}$ is found for which (60) is feasible but $\gamma^{(k)} - \delta_\gamma$ is not; $\gamma^{(k)}$ is then
taken as the optimal performance index.
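
The iteration of Remark 4 can be summarised as a simple feasibility search over γ. The Python
sketch below assumes a user-supplied routine `solve_lmi_60(gamma)` (hypothetical, e.g. built
along the lines of the cvxpy sketch given after Theorem 2) that reports whether the LMIs (60)
are feasible for a fixed γ.

```python
def gamma_optimal(solve_lmi_60, gamma_upper, step=0.01, gamma_min=1e-3):
    """Sketch of the iteration in Remark 4 / Corollary 1: start from the upper
    bound gamma_upper obtained by minimising (theta + chi)/2 under (63)-(65),
    then decrease gamma by a fixed step while (60) stays feasible. The last
    feasible value is reported as the (approximately) optimal performance."""
    gamma = gamma_upper
    while gamma - step > gamma_min and solve_lmi_60(gamma - step):
        gamma -= step
    return gamma
```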

4. Numerical example

In this section, an example is given to illustrate the effectiveness of the proposed method.
Consider system (1)-(3) with the following parameters (the number of subsystems is $N=2$):
$$A_1 = \begin{bmatrix} -2 & 0 \\ 0 & 2\end{bmatrix},\ A_2 = \begin{bmatrix} 1 & -3 \\ 0 & -2\end{bmatrix},\
A_{d1} = \begin{bmatrix} -5 & 0 \\ 0 & -4\end{bmatrix},\ A_{d2} = \begin{bmatrix} -3 & -1 \\ 0 & -6\end{bmatrix},\
B_1 = \begin{bmatrix} -5 & 7 \\ 0 & -9\end{bmatrix},\ B_2 = \begin{bmatrix} -8 & 2 \\ 2 & 6\end{bmatrix},$$
$$E_{11} = \begin{bmatrix} 2 & 5 \\ 0 & 0\end{bmatrix},\ E_{12} = \begin{bmatrix} 1 & 2 \\ 0 & 4\end{bmatrix},\
E_{21} = \begin{bmatrix} 2 & 0 \\ 3 & 1\end{bmatrix},\ E_{22} = \begin{bmatrix} 2 & 0 \\ 0 & 0.2\end{bmatrix},\
E_{d1} = \begin{bmatrix} -1 & 0 \\ 1 & 0.1\end{bmatrix},\ E_{d2} = \begin{bmatrix} 2 & 0 \\ 0 & 1\end{bmatrix},$$
$$C_1 = [-0.8\ \ 0.5],\ C_2 = [0.3\ \ -0.8],\ G_1 = [0.1\ \ 0],\ G_2 = [-0.1\ \ 0],\
D_1 = \begin{bmatrix} 2 & -1 \\ 0 & -4\end{bmatrix},\ D_2 = \begin{bmatrix} 3 & -6 \\ -5 & 12\end{bmatrix},$$
$$H_1 = H_2 = \begin{bmatrix} 0.1 & 0.2 \\ 0 & 0.1\end{bmatrix},\qquad N_1 = N_2 = [0.01\ \ 0].$$
The time-varying delay is $d(t) = 0.5e^{-t}$, the initial condition is $x(t) = [2\ \ -1]^T$, $t\in[-0.5,0]$,
the uncertain parameter matrices are $F_1(t) = F_2(t) = \mathrm{diag}\{\sin t,\ \sin t\}$, $\mu = 0.8$, and the
nonlinear functions are selected as
$$f_1(x(t),t) = \begin{bmatrix} x_1(t)\cos t \\ 0\end{bmatrix},\qquad f_2(x(t),t) = \begin{bmatrix} 0 \\ x_2(t)\cos t\end{bmatrix},$$
so that
$$U_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix},\qquad U_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}.$$
When $M_1 = M_2 = I$, from Theorem 3 and using the LMI toolbox in Matlab, we have
$$X_1 = \begin{bmatrix} 0.6208 & 0.0909 \\ 0.0909 & 0.1061\end{bmatrix},\
X_2 = \begin{bmatrix} 0.2504 & 0.1142 \\ 0.1142 & 0.9561\end{bmatrix},\
Y_1 = \begin{bmatrix} 0.6863 & 0.5839 \\ -3.2062 & -0.3088\end{bmatrix},\
Y_2 = \begin{bmatrix} 0.8584 & -1.3442 \\ -0.5699 & -5.5768\end{bmatrix},$$
$$S = \begin{bmatrix} 0.1123 & 0.0370 \\ 0.0370 & 0.1072\end{bmatrix},\qquad \varepsilon = 19.5408,\qquad \lambda = 0.0719.$$
Then the robust H∞ controller can be designed as
$$K_1 = \begin{bmatrix} 0.3426 & 5.2108 \\ -5.4176 & 1.7297\end{bmatrix},\qquad
K_2 = \begin{bmatrix} 4.3032 & -1.9197 \\ 0.4056 & -5.8812\end{bmatrix}.$$
Choosing the switching law $\sigma(t) = \arg\min_{i\in\underline N}\{x^T(t)X_i^{-1}x(t)\}$, the switching domains are
$$\Omega_1 = \Big\{x(t)\in R^2\ \Big|\ x^T(t)\begin{bmatrix} -2.3815 & -1.0734 \\ -1.0734 & 9.6717\end{bmatrix}x(t) \le 0\Big\},\qquad
\Omega_2 = \Big\{x(t)\in R^2\ \Big|\ x^T(t)\begin{bmatrix} -2.3815 & -1.0734 \\ -1.0734 & 9.6717\end{bmatrix}x(t) > 0\Big\},$$
where the matrix above is $X_1^{-1}-X_2^{-1}$, and the switching law is
$$\sigma(t) = \begin{cases} 1, & x(t)\in\Omega_1 \\ 2, & x(t)\in\Omega_2.\end{cases}$$
The state responses of the closed-loop system are shown in Fig. 1.
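
For readers who wish to reproduce curves such as Fig. 1, the following rough Python sketch
integrates the closed-loop switched system (1) with $u_f = M_\sigma K_\sigma x$, the constant initial
function and the min-type switching law. It is an illustration under a simple forward-Euler
scheme with our own names, not the authors' simulation code.

```python
import numpy as np

def simulate_switched(A, Ad, B, D, K, M, X, f, w, d, T=2.0, h=1e-3, x0=(2.0, -1.0)):
    """Forward-Euler simulation of the closed-loop switched system (1) with
    sigma = argmin_i x^T X_i^{-1} x. Arguments are lists indexed by subsystem;
    d(t) is the delay, f(i, x, t) the nonlinearity, w(t) the disturbance."""
    steps = int(T / h)
    Xinv = [np.linalg.inv(Xi) for Xi in X]
    x = np.array(x0, dtype=float)
    hist = [x.copy()]                       # state history for the delayed term
    ts = [0.0]
    for k in range(steps):
        t = k * h
        sig = int(np.argmin([x @ P @ x for P in Xinv]))      # switching law
        xd = hist[max(0, int(round((t - d(t)) / h)))]        # x(t - d(t))
        u = K[sig] @ x
        dx = (A[sig] @ x + Ad[sig] @ xd + B[sig] @ (M[sig] @ u)
              + D[sig] @ w(t) + f(sig, x, t))
        x = x + h * dx
        hist.append(x.copy())
        ts.append(t + h)
    return np.array(ts), np.array(hist)
```

With the data of this section (for instance $w(t)=0$ and the nonlinearities chosen above), the
sketch can be used to obtain plots analogous to Fig. 1.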



[Figure 1 about here: state components x1 and x2 plotted against time t/s.]

Fig. 1. State responses of the closed-loop system with the normal switched controller when
the actuator is normal



Fig. 1 illustrates that the designed normal switched controller can guarantee that the system
is asymptotically stable when the actuator is normal.

However, in practice the actuator fault cannot be avoided. Here we assume an actuator fault
model with the following parameters.
For subsystem 1: $0.04\le m_{11}\le 1$, $0.1\le m_{12}\le 1.2$.
For subsystem 2: $0.1\le m_{21}\le 1$, $0.04\le m_{22}\le 1$.
Then we have
$$M_{10} = \begin{bmatrix} 0.52 & 0 \\ 0 & 0.65\end{bmatrix},\quad
M_{20} = \begin{bmatrix} 0.55 & 0 \\ 0 & 0.52\end{bmatrix},\quad
J_1 = \begin{bmatrix} 0.92 & 0 \\ 0 & 0.85\end{bmatrix},\quad
J_2 = \begin{bmatrix} 0.82 & 0 \\ 0 & 0.92\end{bmatrix}.$$
Choosing the fault matrices of subsystem 1 and subsystem 2 as
$$M_1 = \begin{bmatrix} 0.04 & 0 \\ 0 & 0.1\end{bmatrix},\qquad
M_2 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.04\end{bmatrix},$$
and still using the above normal switched controller to stabilize the system, the simulation
result for the state responses of the closed-loop switched system is shown in Fig. 2.



[Figure 2 about here: state components x1 and x2 plotted against time t/s.]

Fig. 2. State responses of the closed-loop system with the normal switched controller when
the actuator is failed

Obviously, the system state oscillates and the system cannot be stabilized effectively.
The comparison of Fig. 1 and Fig. 2 shows that the normal switched controller may lose
efficacy when the actuator fails.






For the above fault model, by Theorem 4 and using the LMI toolbox in Matlab, we have
$$X_1 = \begin{bmatrix} 0.0180 & 0.0085 \\ 0.0085 & 0.0123\end{bmatrix},\
X_2 = \begin{bmatrix} 0.0436 & -0.0007 \\ -0.0007 & 0.0045\end{bmatrix},\
Y_1 = \begin{bmatrix} 0.4784 & 0.6606 \\ -0.5231 & -0.0119\end{bmatrix},\
Y_2 = \begin{bmatrix} 0.7036 & -0.1808 \\ -0.1737 & -0.5212\end{bmatrix},$$
$$S = \begin{bmatrix} 0.0130 & 0.0000 \\ 0.0000 & 0.0012\end{bmatrix},\qquad
\alpha = 0.0416,\qquad \varepsilon = 946.1561,\qquad \lambda = 0.0036.$$
Then the robust H∞ reliable controller can be designed as
$$K_1 = \begin{bmatrix} 1.8533 & 52.3540 \\ -42.5190 & 28.3767\end{bmatrix},\qquad
K_2 = \begin{bmatrix} 15.5242 & -37.5339 \\ -5.8387 & -116.0295\end{bmatrix}.$$
Choosing the switching law
$$\sigma(t) = \begin{cases} 1, & x(t)\in\Omega_1 \\ 2, & x(t)\in\Omega_2\end{cases}$$
with
$$\Omega_1 = \Big\{x(t)\in R^2\ \Big|\ x^T(t)\begin{bmatrix} 59.6095 & -60.5390 \\ -60.5390 & -100.9210\end{bmatrix}x(t) \le 0\Big\},\qquad
\Omega_2 = \Big\{x(t)\in R^2\ \Big|\ x^T(t)\begin{bmatrix} 59.6095 & -60.5390 \\ -60.5390 & -100.9210\end{bmatrix}x(t) > 0\Big\},$$
where the matrix above is $X_1^{-1}-X_2^{-1}$, the state responses of the closed-loop system are
shown in Fig. 3.



[Figure 3 about here: state components x1 and x2 plotted against time t/s.]

Fig. 3. State responses of the closed-loop system with the reliable switched controller when
the actuator is failed






It can be seen that the designed robust H∞ reliable controller makes the closed-loop switched
system asymptotically stable for the admissible uncertain parameters and actuator faults.
The simulation in Fig. 3 also shows that the proposed robust H∞ reliable controller can
overcome the effect of the time-varying delay on the switched system.

Moreover, by Corollary 1 and the solving procedure of Remark 4, we obtain the optimal H∞
disturbance attenuation performance $\gamma = 0.54$, and the optimal robust H∞ reliable controller
can be designed as
$$K_1 = \begin{bmatrix} 9.7714 & 115.4893 \\ -69.8769 & 41.1641\end{bmatrix},\qquad
K_2 = \begin{bmatrix} 9.9212 & -106.5624 \\ -62.1507 & -608.0198\end{bmatrix}.$$
The parameter matrices $X_1$, $X_2$ of the switching law are
$$X_1 = \begin{bmatrix} 0.0031 & 0.0011 \\ 0.0011 & 0.0018\end{bmatrix},\qquad
X_2 = \begin{bmatrix} 0.0119 & -0.0011 \\ -0.0011 & 0.0004\end{bmatrix}.$$


5. Conclusion

In order to overcome the adverse effect of time-varying delay on switched systems and to
make the closed-loop system both disturbance-attenuating and fault-tolerant, robust H∞
reliable control for a class of uncertain switched systems with actuator faults and
time-varying delays has been investigated. First, the concepts of robust reliable controller,
γ-suboptimal robust H∞ reliable controller and γ-optimal robust H∞ reliable controller were
presented. Secondly, a fault model of the actuator for switched systems was put forward.
The multiple Lyapunov-Krasovskii functional method and the linear matrix inequality
technique were adopted to design the robust H∞ reliable controller. Since the matrix
inequalities in the γ-optimal problem are not linear, a variable substitution method was used
to acquire the controller gain matrices, and an iterative procedure for computing the optimal
disturbance attenuation performance γ was presented. Finally, a numerical example showed
the effectiveness of the proposed method: the designed controller stabilizes the original
system and provides H∞ disturbance attenuation performance when the system has
uncertain parameters and actuator faults. Our future work will focus on constructing an
appropriate multiple Lyapunov-Krasovskii functional to obtain a delay-dependent robust
H∞ reliable controller design method.

6. Acknowledgment 

The authors are very grateful to the reviewers and to the editors for their helpful comments 
and suggestions on this paper. This work was supported by the Natural Science Foundation 
of China under Grant No. 60974027. 



7. References 

Boyd, S. P.; Ghaoui, L. E. & Feron, et al. (1994). Linear matrix inequalities in system and control 
theory. SIAM. 




Cao, Y.; Sun, Y. & Cheng, C. (1998). Delay dependent robust stabilization of uncertain 

systems with multiple state delays. IEEE Transactions on Automatic Control, Vol. 43, 

No. 11, 1608-1612. 
Cong, S.; Fei, S. M. & Li, T. (2007). Exponential stability analysis for the switched system 

with time delay: multiple Lyapunov functions approach, Acta Automatica Sinica, 

2007, Vol. 33, No. 9, 985-988 (in Chinese) 
Kim, S.; Campbell, S. A. & Liu, X. (2006). Stability of a class of linear switching systems with 

time delay. IEEE Transactions on Circuits & Systems-I, Vol. 53, No. 2, 384-393. 
Lien, C. H. (2007). Non-fragile guaranteed cost control for uncertain neutral dynamic 

systems with time-varying delays in state and control input. Chaos, Solitons & 

Fractals, Vol. 31, No. 4, 889-899. 
Peleties, P. & DeCarlo, R. A. (1991). Asymptotic stability of m-switched systems using
Lyapunov-like functions. Proceedings of American Control Conference, 1991, 1679-1684.
Pettersson, S. & Lennartson, B. (2002). Hybrid system stability and robustness verification 

using linear matrix inequalities. International Journal of Control, Vol. 75, No. 16-17, 

1335-1355. 
Sun, Z. D. & Ge, S. S. (2005). Analysis and synthesis of switched linear control systems. 

Automatica, Vol. 41, No. 2, 181-195. 
Song, Y.; Fan, J. & Fei, M. R., et al. (2008). Robust H∞ control of discrete switched
systems with time delay. Applied Mathematics and Computation, Vol. 205, No. 1, 159-169.
Sun, W. A. & Zhao, J. (2005). H∞ robust control of uncertain switched linear systems based
on LMIs. Control and Decision, Vol. 20, No. 6, 650-655. (in Chinese)
Song, Z. Y.; Nie, H. & Zhao, J. (2007). Robust H∞ control for discrete switched systems with
time-varying delays. Journal of Northeastern University (Natural Science), Vol. 28, No. 4,
469-472. (in Chinese)
Tomlin, C; Pappas, G. J. & Sastry, S. (1998). Conflict resolution for air traffic management: a 

study in multiagent hybrid systems. IEEE Transactions on Automatic Control, Vol. 43, 

No. 4, 509-521. 
Varaiya, P. (1993). Smart cars on smart roads: problems of control. IEEE Transactions on 

Automatic Control, Vol. 38, No. 2, 195-207. 
Wang, W. & Brockett, R. W. (1997). Systems with finite communication bandwidth 

constraints-part I: State estimation problems. IEEE Transactions on Automatic 

Control, Vol. 42, No. 9, 1294-1299. 
Wang, R.; Liu, J. C. & Zhao, J. (2006). Reliable guaranteed-cost control for a class of 

uncertain switched linear systems with time-delay. Control Theory and Applications, 

Vol. 23, No. 6, 1001-1004. (in Chinese) 
Xiang, Z. R. & Wang, R. H. (2008). Robust reliable control for uncertain switched nonlinear 

systems with time delay. Proceedings of the 7th World Congress on Intelligent Control 

and Automation, pp. 5487-5491. 
Xiang, Z. R. & Wang, R. H. (2009). Robust L∞ reliable control for uncertain nonlinear
switched systems with time delay. Applied Mathematics and Computation, Vol. 210, No. 1,
202-210.




Zhang, Y.; Liu, X. Z. & Shen, X. M. (2007). Stability of switched systems with time delay. 
Nonlinear Analysis: Hybrid Systems, Vol. 1, No. 1, 44-58. 



Part 3 
Sliding Mode Control 



Optimal Sliding Mode Control for a 

Class of Uncertain Nonlinear Systems 

Based on Feedback Linearization 

Hai-Ping Pang and Qing Yang 

Qingdao University of Science and Technology 

China 



1. Introduction 

Optimal control is one of the most important branches in modern control theory, and linear 
quadratic regulator (LQR) has been well used and developed in linear control systems. 
However, several problems arise in applying LQR to uncertain nonlinear systems. The
optimal LQR problem for nonlinear systems often leads to a nonlinear two-point
boundary-value (TPBV) problem (Tang et al. 2008; Pang et al. 2009), for which an analytical
solution generally does not exist except in the simplest cases (Tang & Gao, 2005).
Additionally, the optimal controller design is usually based on a precise mathematical model.
If the controlled system is subject to uncertainties, such as parameter variations, unmodeled
dynamics and external disturbances, the performance criterion optimized for the nominal
system deviates from its optimal value, and the system may even become unstable
(Gao & Hung, 1993; Pang & Wang, 2009).

The main control strategies for dealing with optimal control problems of nonlinear systems
are as follows. First, obtain an approximate solution of the optimal control problem by
iteration or recursion, such as the successive approximation approach (Tang, 2005), SDRE
(Shamma & Cloutier, 2001) and ASRE (Cimen & Banks, 2004). These methods give direct
results but are usually complex and difficult to realize. Second, transform the nonlinear
system into a linear one by approximate linearization (i.e. Jacobian linearization); optimal
control can then be realized easily for the transformed system. The main problem of this
method is that the transformation is only applicable to systems with mild nonlinearity
operating in a very small neighborhood of the equilibrium points. Third, transform the
nonlinear system into a linear one by the exact linearization technique (Mokhtari et al. 2006;
Pang & Chen, 2009). This differs entirely from approximate linearization in that approximate
linearization is done simply by neglecting all terms of order higher than one in the dynamics,
while exact linearization is achieved by exact state transformations and feedback.

As a precise and robust algorithm, sliding mode control (SMC) (Yang & Özgüner, 1997;
Choi et al. 1993; Choi et al. 1994) has attracted a great deal of attention to the uncertain 
nonlinear system control problems. Its outstanding advantage is that the sliding motion 
exhibits complete robustness to system uncertainties. In this chapter, combining LQR and 
SMC, the design of global robust optimal sliding mode controller (GROSMC) is concerned. 
Firstly, the GROSMC is designed for a class of uncertain linear systems. And then, a class of 




affine nonlinear systems is considered. The exact linearization technique is adopted to 
transform the nonlinear system into an equivalent linear one and a GROSMC is designed 
based on the transformed system. Lastly, the global robust optimal sliding mode tracking 
controller is studied for a class of uncertain affine nonlinear systems. Simulation results 
illustrate the effectiveness of the proposed methods. 

2. Optimal sliding mode control for uncertain linear system 

In this section, the problem of robustifying LQR for a class of uncertain linear systems is
considered. An optimal controller is designed for the nominal system and an integral sliding
surface (Lee, 2006; Laghrouche et al. 2007) is constructed. The ideal sliding motion can
minimize a given quadratic performance index, and the reaching phase, which is inherent in 
conventional sliding mode control, is completely eliminated (Basin et al. 2007). Then the 
sliding mode control law is synthesized to guarantee the reachability of the specified sliding 
surface. The system dynamics is global robust to uncertainties which satisfy matching 
conditions. A GROSMC is realized. To verify the effectiveness of the proposed scheme, a 
robust optimal sliding mode controller is developed for rotor position control of an electrical 
servo drive system. 

2.1 System description and problem formulation

Consider an uncertain linear system described by
$$\dot x(t) = (A+\Delta A)x(t) + (B+\Delta B)u(t) + \delta(x,t) \quad (1)$$
where $x(t)\in R^n$ and $u(t)\in R^m$ are the state and the control vectors, respectively, $\Delta A$ and
$\Delta B$ are unknown time-varying matrices representing system parameter uncertainties, and
$\delta(x,t)$ is an uncertain extraneous disturbance and/or unknown nonlinearity of the system.
Assumption 1. The pair $(A,B)$ is controllable and $\mathrm{rank}(B)=m$.
Assumption 2. $\Delta A$, $\Delta B$ and $\delta(x,t)$ are continuously differentiable in $x$ and piecewise
continuous in $t$.
Assumption 3. There exist unknown continuous functions of appropriate dimensions
$\Delta\bar A$, $\Delta\bar B$ and $\bar\delta(x,t)$ such that
$$\Delta A = B\Delta\bar A,\qquad \Delta B = B\Delta\bar B,\qquad \delta(x,t) = B\bar\delta(x,t).$$
These conditions are the so-called matching conditions.
From these assumptions, the state equation of the uncertain dynamic system (1) can be
rewritten as
$$\dot x(t) = Ax(t) + Bu(t) + B\tilde\delta(x,t), \quad (2)$$
where $\tilde\delta(x,t) = \Delta\bar Ax(t) + \Delta\bar Bu(t) + \bar\delta(x,t)$ collects all the matched uncertainties.
Assumption 4. There exist unknown positive constants $\gamma_0$ and $\gamma_1$ such that
$$\|\tilde\delta(x,t)\| \le \gamma_0 + \gamma_1\|x\|,$$
where $\|\cdot\|$ denotes the Euclidean norm.
By setting the uncertainty to zero, we obtain the dynamic equation of the nominal system
of (1) as
$$\dot x(t) = Ax(t) + Bu(t). \quad (3)$$
For the nominal system (3), let us define a quadratic performance index as follows:
$$J_0 = \frac{1}{2}\int_0^{\infty}\big[x^T(t)Qx(t) + u^T(t)Ru(t)\big]dt, \quad (4)$$
where $Q\in R^{n\times n}$ is a semi-positive definite matrix weighting the states and $R\in R^{m\times m}$ is a
positive definite matrix weighting the control variables. According to optimal control theory
and considering Assumption 1, there exists an optimal feedback control law that minimizes
the index (4). The optimal control law can be written as
$$u^*(t) = -R^{-1}B^TPx(t), \quad (5)$$
where $P\in R^{n\times n}$ is the positive definite matrix solution of the Riccati matrix equation
$$-PA - A^TP + PBR^{-1}B^TP - Q = 0. \quad (6)$$
So the dynamic equation of the closed-loop system is
$$\dot x(t) = (A - BR^{-1}B^TP)x(t). \quad (7)$$
Obviously, according to optimal control theory, the closed-loop system is asymptotically
stable. However, when the system is subjected to uncertainties such as external disturbances
and parameter variations, the optimal behavior could deteriorate and the system may even
become unstable. In the next part, we will utilize the sliding mode control strategy to
robustify the optimal control law.
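
The gain in (5) only requires the stabilizing solution of the algebraic Riccati equation (6),
which standard numerical libraries provide. A short sketch with SciPy (names are ours):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the Riccati equation (6) and return the optimal state-feedback
    gain of (5), so that u*(t) = -K x(t) with K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)     # R^{-1} B^T P
    return K, P
```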

2.2 Design of optimal sliding mode controller
2.2.1 Design of optimal sliding mode surface

Considering the uncertain system (2), we choose the integral sliding surface as follows:
$$s(x,t) = G[x(t)-x(0)] - G\int_0^t (A - BR^{-1}B^TP)x(\tau)d\tau = 0, \quad (8)$$
where $G\in R^{m\times n}$ is chosen such that $GB$ is nonsingular, and $x(0)$ is the initial state vector.
In sliding mode, we have $s(x,t)=0$ and $\dot s(x,t)=0$. Differentiating (8) with respect to $t$ and
considering (1), we obtain
$$\dot s = G[(A+\Delta A)x + (B+\Delta B)u + \delta] - G(A - BR^{-1}B^TP)x
= G(\Delta Ax + BR^{-1}B^TPx) + G\delta + G(B+\Delta B)u, \quad (9)$$
so the equivalent control becomes
$$u_{eq} = -[G(B+\Delta B)]^{-1}[G(\Delta A + BR^{-1}B^TP)x + G\delta]. \quad (10)$$
Substituting (10) into (1) and considering Assumption 3, the ideal sliding mode dynamics
becomes
$$\dot x = (A+\Delta A)x - (B+\Delta B)[G(B+\Delta B)]^{-1}[G(\Delta A + BR^{-1}B^TP)x + G\delta] + \delta
= (A - BR^{-1}B^TP)x. \quad (11)$$
Comparing equation (11) with equation (7), we can see that they have the same form, so the
sliding mode is asymptotically stable. Furthermore, it can be seen from (11) that the sliding
mode is robust to uncertainties satisfying the matching conditions. We therefore call (8) a
robust optimal sliding surface.

2.2.2 Design of sliding mode control law

To ensure the reachability of the sliding mode in finite time, we choose the sliding mode
control law as follows:
$$u(t) = u_c(t) + u_d(t),$$
$$u_c(t) = -R^{-1}B^TPx(t), \quad (12)$$
$$u_d(t) = -(GB)^{-1}\big(\eta + \gamma_0\|GB\| + \gamma_1\|GB\|\,\|x(t)\|\big)\mathrm{sgn}(s),$$
where $\eta>0$, $u_c(t)$ is the continuous part, used to stabilize and optimize the nominal system,
and $u_d(t)$ is the discontinuous part, which provides complete compensation for the
uncertainties of system (1). Let us select a quadratic performance index as follows:
$$J(t) = \frac{1}{2}\int_0^{\infty}\big[x^T(t)Qx(t) + u_c^T(t)Ru_c(t)\big]dt, \quad (13)$$
where the meanings of $Q$ and $R$ are the same as in (4).
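
For implementation, one evaluation of the controller (8), (12) amounts to a few
matrix-vector products. The Python sketch below uses our own variable names and assumes
the caller maintains the running integral of (8), e.g. by updating
`s_int += G @ (A - B @ np.linalg.solve(R, B.T @ P)) @ x * dt` at every step.

```python
import numpy as np

def grosmc_control(x, x0, s_int, G, A, B, P, R, eta, g0, g1):
    """One evaluation of the integral-sliding-mode controller (8), (12):
    s = G(x - x0) - s_int,  u = u_c + u_d.
    eta, g0, g1 are the gains eta, gamma_0, gamma_1 of (12)."""
    GB = G @ B
    s = G @ (x - x0) - s_int
    u_c = -np.linalg.solve(R, B.T @ P @ x)
    rho = eta + g0 * np.linalg.norm(GB) + g1 * np.linalg.norm(GB) * np.linalg.norm(x)
    u_d = -np.linalg.solve(GB, rho * np.sign(s))
    return u_c + u_d, s
```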

Theorem 1. Consider the uncertain linear system (1) with Assumptions 1-4, and let the
control $u$ and the sliding surface be given by (12) and (8), respectively. The control law (12)
can force the system trajectories with arbitrarily given initial conditions to reach the sliding
surface in finite time and maintain them on it thereafter.
Proof. Choosing $V = (1/2)s^Ts$ as a Lyapunov function, differentiating this function with
respect to $t$ and considering Assumptions 1-4, we have
$$\begin{aligned}
\dot V &= s^T\dot s = s^T\{G[(A+\Delta A)x + (B+\Delta B)u + \delta] - G(A - BR^{-1}B^TP)x\} \\
&= s^T\{GBu + GB\tilde\delta + GBR^{-1}B^TPx\} \\
&= s^T\{-GBR^{-1}B^TPx - (\eta + \gamma_0\|GB\| + \gamma_1\|GB\|\,\|x\|)\mathrm{sgn}(s) + GB\tilde\delta + GBR^{-1}B^TPx\} \\
&= s^T\{-(\eta + \gamma_0\|GB\| + \gamma_1\|GB\|\,\|x\|)\mathrm{sgn}(s) + GB\tilde\delta\} \\
&= -\eta\|s\|_1 - (\gamma_0\|GB\| + \gamma_1\|GB\|\,\|x\|)\|s\|_1 + s^TGB\tilde\delta \\
&\le -\eta\|s\|_1 - (\gamma_0\|GB\| + \gamma_1\|GB\|\,\|x\|)\|s\|_1 + (\gamma_0\|GB\| + \gamma_1\|GB\|\,\|x\|)\|s\|
\end{aligned}$$
where $\|\cdot\|_1$ denotes the 1-norm. Noting the fact that $\|s\|_1 \ge \|s\|$, we get
$$\dot V = s^T\dot s \le -\eta\|s\|. \quad (14)$$
This implies that the sliding mode control law chosen according to (12) ensures that
trajectories starting from arbitrarily given points are driven onto the sliding surface (8) in
finite time and do not leave it thereafter despite the uncertainties. The proof is complete.
Conclusion 1. The uncertain system (1) with the integral sliding surface (8) and the control
law (12) achieves global sliding mode, and the performance index (13) is minimized, so the
designed system is globally robust and optimal.

2.3 Application to electrical servo drive

Speed and position electrical servo drive systems are widely used in engineering, for example
in CNC machines, industrial robots and winding machines. The main properties required of
servo systems include high tracking accuracy, no overshoot, no oscillation, quick response
and good robustness.

In general, with the implementation of field-oriented control, the mechanical equation of an
induction motor drive or a permanent magnet synchronous motor drive can be described as
$$J_m\ddot\theta(t) + B_m\dot\theta(t) + T_d = T_e \quad (15)$$
where $\theta$ is the rotor position, $J_m$ is the moment of inertia, $B_m$ is the damping coefficient, $T_d$
denotes the external load disturbance, nonlinear friction and unpredicted uncertainties, and
$T_e$ represents the electric torque, defined as
$$T_e = K_ti \quad (16)$$
where $K_t$ is the torque constant and $i$ is the torque current command.

Define the position tracking error $e(t) = \theta_d(t) - \theta(t)$, where $\theta_d(t)$ denotes the desired
position, and let $x_1(t) = e(t)$, $x_2(t) = \dot x_1(t)$, $u = i$. Then the error state equation of the
electrical servo drive can be described as
$$\begin{bmatrix}\dot x_1 \\ \dot x_2\end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 0 & -\frac{B_m}{J_m}\end{bmatrix}\begin{bmatrix} x_1 \\ x_2\end{bmatrix}
+ \begin{bmatrix} 0 \\ -\frac{K_t}{J_m}\end{bmatrix}u
+ \begin{bmatrix} 0 \\ \frac{1}{J_m}\end{bmatrix}T_d
+ \begin{bmatrix} 0 \\ \ddot\theta_d + \frac{B_m}{J_m}\dot\theta_d\end{bmatrix}. \quad (17)$$
Supposing the desired position is a step signal, the error state equation can be simplified as
$$\begin{bmatrix}\dot x_1 \\ \dot x_2\end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 0 & -\frac{B_m}{J_m}\end{bmatrix}\begin{bmatrix} x_1 \\ x_2\end{bmatrix}
+ \begin{bmatrix} 0 \\ -\frac{K_t}{J_m}\end{bmatrix}u
+ \begin{bmatrix} 0 \\ \frac{1}{J_m}\end{bmatrix}T_d. \quad (18)$$
The parameters of the servo drive model in the nominal condition with $T_d = 0\,\mathrm{Nm}$ are
(Lin & Chou, 2003):
$$\bar J_m = 5.77\times 10^{-2}\,\mathrm{Nms^2},\qquad \bar B_m = 8.8\times 10^{-3}\,\mathrm{Nms/rad},\qquad K_t = 0.667\,\mathrm{Nm/A}.$$
The initial condition is $x(0) = [1\ \ 0]^T$. To investigate the effectiveness of the proposed
controller, two cases with parameter variations in the electrical servo drive and load torque
disturbance are considered here.
Case 1: $J_m = \bar J_m$, $B_m = \bar B_m$, $T_d = 1(t-10)\,\mathrm{Nm} - 1(t-13)\,\mathrm{Nm}$.






Case 2: $J_m = 3\bar J_m$, $B_m = \bar B_m$, $T_d = 0$.

The optimal controller and the robust optimal SMC are designed, respectively, for both
cases. The optimal controller is based on the nominal system with the quadratic performance
index (4), where
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix},\qquad R = 1.$$
In Case 1, the simulation results obtained with the two controllers are shown in Fig. 1. It is
seen that when there is no disturbance ($t<10\,\mathrm{s}$), both systems have almost the same
performance.



[Figure 1 about here: (a) Position responses, (b) Performance indexes; the curves compare the
Robust Optimal SMC and the Optimal Control, plotted against time (sec).]

Fig. 1. Simulation results in Case 1

But when a load torque disturbance occurs during $t = 10\sim13\,\mathrm{s}$, the position trajectory of
the optimal control system deviates from the desired value, whereas the position trajectory of
the robust optimal SMC system is almost unaffected.

In Case 2, the simulation results obtained with the two controllers are given in Fig. 2. It is
seen that the robust optimal SMC system is insensitive to the parameter uncertainty: its
position trajectory is almost the same as that of the nominal system. However, the optimal
control system is affected by the parameter variation; compared with the nominal system, its
position trajectory is different, the overshoot is bigger and the relative stability degrades.
In summary, the robust optimal SMC system possesses the optimal performance and global
robustness to uncertainties.



[Figure 2 about here: (a) Position responses, (b) Performance indexes; the curves compare the
Robust Optimal SMC and the Optimal Control, plotted against time (sec).]

Fig. 2. Simulation results in Case 2



2.4 Conclusion

In this section, the integral sliding mode control strategy has been applied to robustify the
optimal controller. A robust optimal sliding surface is designed so that the initial condition
lies on the surface and the reaching phase is eliminated. The system is globally robust to
uncertainties satisfying the matching conditions, and the sliding motion minimizes the given
quadratic performance index. This method has been used to control the rotor position of an
electrical servo drive. Simulation results show that the robust optimal SMC is superior to the
optimal LQR controller in robustness to parameter variations and external disturbances.




3. Optimal sliding mode control for uncertain nonlinear systems

In the section above, the robust optimal SMC design problem for a class of uncertain linear
systems was studied. However, nearly all practical systems contain nonlinearities, and some
difficulties arise if optimal control is applied to nonlinear problems (Chiou & Huang, 2005;
Ho, 2007; Cimen & Banks, 2004; Tang et al., 2007). In this section, the global robust optimal
sliding mode controller (GROSMC) is designed based on feedback linearization for a class of
MIMO uncertain nonlinear systems.

3.1 Problem formulation

Consider an uncertain affine nonlinear system in the form of
$$\dot x = f(x) + g(x)u + d(t,x),\qquad y = H(x), \quad (19)$$
where $x\in R^n$ is the state, $u\in R^m$ is the control input, and $f(x)$ and $g(x)$ are sufficiently
smooth vector fields on a domain $D\subset R^n$. Moreover, the state vector $x$ is assumed
available, $H(x)$ is a measured, sufficiently smooth output function with $H(x) = (h_1,\cdots,h_m)^T$,
and $d(t,x)$ is an unknown function vector representing the system uncertainties, including
system parameter variations, unmodeled dynamics and external disturbances.
Assumption 5. There exists an unknown continuous function vector $\delta(t,x)$ such that $d(t,x)$
can be written as
$$d(t,x) = g(x)\delta(t,x).$$
This is called the matching condition.
Assumption 6. There exist positive constants $\gamma_0$ and $\gamma_1$ such that
$$\|\delta(t,x)\| \le \gamma_0 + \gamma_1\|x\|,$$
where the notation $\|\cdot\|$ denotes the usual Euclidean norm.
By setting all the uncertainties to zero, the nominal system of the uncertain system (19) can
be described as
$$\dot x = f(x) + g(x)u,\qquad y = H(x). \quad (20)$$
The objective of this section is to synthesize a robust optimal sliding mode controller so that
the uncertain affine nonlinear system not only has the optimal performance of the nominal
system but is also robust to the system uncertainties. However, the nominal system is
nonlinear. To avoid the nonlinear TPBV problem and the approximate linearization problem,
we adopt feedback linearization to transform the uncertain nonlinear system (19) into an
equivalent linear one, design an optimal controller for it, and then propose a GROSMC.

3.2 Feedback linearization

Feedback linearization is an important approach to nonlinear control design. The central
idea of this approach is to find a state transformation $z = T(x)$ and an input transformation
$u = u(x,v)$ so that the nonlinear system dynamics is transformed into an equivalent linear
time-invariant dynamics in the familiar form $\dot z = Az + Bv$, to which linear control
techniques can then be applied.

Assume that system (20) has the vector relative degree $\{r_1,\cdots,r_m\}$. According to the
definition of relative degree, we have
$$L_{g_j}L_f^kh_i(x) = 0,\qquad 0\le k < r_i - 1,\ \ 1\le i,j\le m, \quad (21)$$
and the decoupling matrix
$$E(x) = \begin{bmatrix}
L_{g_1}(L_f^{r_1-1}h_1) & \cdots & L_{g_m}(L_f^{r_1-1}h_1) \\
\vdots & & \vdots \\
L_{g_1}(L_f^{r_m-1}h_m) & \cdots & L_{g_m}(L_f^{r_m-1}h_m)
\end{bmatrix}$$
is nonsingular in some domain $X$, $\forall x\in X$.

Choose the state and input transformations as follows:
$$z_i^j = T_i^j(x) = L_f^jh_i,\qquad i=1,\cdots,m;\ j=0,1,\cdots,r_i-1, \quad (22)$$
$$u = E^{-1}(x)[v - K(x)], \quad (23)$$
where $K(x) = (L_f^{r_1}h_1,\cdots,L_f^{r_m}h_m)^T$ and $v$ is an equivalent input to be designed later. The
uncertain nonlinear system (19) can thus be transformed into $m$ subsystems, each in the
form of
$$\dot z_i = \begin{bmatrix}
0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & 1 \\
0 & 0 & \cdots & 0
\end{bmatrix}z_i + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1\end{bmatrix}v_i + \omega_i(t,z), \quad (24)$$
so that system (19) is transformed into the following equivalent model of a simple linear form:
$$\dot z(t) = Az(t) + Bv(t) + \omega(t,z), \quad (25)$$
where $z\in R^n$ and $v\in R^m$ are the new state vector and input, respectively, $A\in R^{n\times n}$ and
$B\in R^{n\times m}$ are constant matrices, and $(A,B)$ is controllable. $\omega(t,z)\in R^n$ represents the
uncertainties of the equivalent linear system. As we can see, $\omega(t,z)$ also satisfies the matching
condition; in other words, there exists an unknown continuous vector function $\tilde\omega(t,z)$ such
that $\omega(t,z) = B\tilde\omega(t,z)$.




3.3 Design of GROSMC

3.3.1 Optimal control for nominal system

The nominal system of (25) is
$$\dot z(t) = Az(t) + Bv(t). \quad (26)$$
For (26), let $v = v_0$, where $v_0$ minimizes the quadratic performance index
$$J = \frac{1}{2}\int_0^{\infty}\big[z^T(t)Qz(t) + v_0^T(t)Rv_0(t)\big]dt, \quad (27)$$
where $Q\in R^{n\times n}$ is a symmetric positive definite matrix and $R\in R^{m\times m}$ is a positive definite
matrix. According to optimal control theory, the optimal feedback control law can be
described as
$$v_0(t) = -R^{-1}B^TPz(t) \quad (28)$$
with $P$ the solution of the matrix Riccati equation
$$PA + A^TP - PBR^{-1}B^TP + Q = 0. \quad (29)$$
So the closed-loop dynamics is
$$\dot z(t) = (A - BR^{-1}B^TP)z(t). \quad (30)$$
The closed-loop system is asymptotically stable. The solution of equation (30) is the optimal
trajectory $z^*(t)$ of the nominal system with the optimal control law (28). However, if the
control law (28) is applied to the uncertain system (25), the system state trajectory will
deviate from the optimal trajectory and the system may even become unstable. Next we
introduce the integral sliding mode control technique to robustify the optimal control law, so
that the state trajectory of the uncertain system (25) coincides with the optimal trajectory of
the nominal system (26).

3.3.2 The optimal sliding surface

Considering the uncertain system (25) and the optimal control law (28), we define an
integral sliding surface in the form of
$$s(t) = G[z(t)-z(0)] - G\int_0^t (A - BR^{-1}B^TP)z(\tau)d\tau, \quad (31)$$
where $G\in R^{m\times n}$ is chosen such that $GB$ is nonsingular, and $z(0)$ is the initial state vector.
Differentiating (31) with respect to $t$ and considering (25), we obtain
$$\dot s(t) = G\dot z(t) - G(A-BR^{-1}B^TP)z(t)
= G[Az(t)+Bv(t)+\omega(t,z)] - G(A-BR^{-1}B^TP)z(t)
= GBv(t) + GBR^{-1}B^TPz(t) + G\omega(t,z). \quad (32)$$
Letting $\dot s(t)=0$, the equivalent control becomes
$$v_{eq}(t) = -(GB)^{-1}[GBR^{-1}B^TPz(t) + G\omega(t,z)]. \quad (33)$$
Substituting (33) into (25), the sliding mode dynamics becomes
$$\dot z = Az - B(GB)^{-1}(GBR^{-1}B^TPz + G\omega) + \omega
= Az - BR^{-1}B^TPz - B(GB)^{-1}GB\tilde\omega + B\tilde\omega
= (A - BR^{-1}B^TP)z. \quad (34)$$
Comparing (34) with (30), we can see that the sliding mode of the uncertain linear system
(25) is the same as the optimal dynamics of (26); thus the sliding mode is also asymptotically
stable, and the sliding motion guarantees global robustness of the controlled system to
uncertainties satisfying the matching condition. We call (31) a global robust optimal sliding
surface. Substituting the state transformation $z = T(x)$ into (31), we obtain the optimal
switching function $s(x,t)$ in the $x$-coordinates.

3.3.3 The control law 

After designing the optimal sliding surface, the next step is to select a control law to ensure 

the reachability of sliding mode in finite time. 

Differentiating s(x,t) with respect to t and considering system (20), we have 



ds . ds ds , r/ . / v v ds 

s = ^ x + ^ = ^fW + ^ x » + T7- 

dx dt dx dt 

Let s = , the equivalent control of nonlinear nominal system (20) is obtained 

-i-l r 



"«,(') = 



g(*) 



ds r . x ds 
— f(x) + — 

dx dt 



(35) 



(36) 



Considering equation (23), we have u = E 1 (x)[v -K(x)] . 
Now, we select the control law in the form of 



M disW : 





U(t] 


= M con(0 


+ "dis(0/ 




M can(*) = - 


ds 

T-8( x ) 

_dx 


-1 


ds r/ . ds 
— f(x) + — , 

_dx JK } dt] 


--- 


ds 

-z-g(x) 

_dx 


-l 

(7 + (/o + rilMI) 


ds 

T-g(*) 

OX 



(37) 



)sg n ( s )> 



where sgn(s) = [sgn(s a ) sgn(s 2 ) ••• sgn(s m )] and //>0. u con (t) and u dis (t) denote 
continuous part and discontinuous part of u(t) , respectively. 

The continuous part u con (t) , which is equal to the equivalent control of nominal system (20), 
is used to stabilize and optimize the nominal system. The discontinuous part u dis (t) 
provides the complete compensation of uncertainties for the uncertain system (19). 
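A hedged Python sketch of the combined control law (37); ds_dx (the value of ∂s/∂x at the current state), ds_dt, the vector fields f, g and the bounds r0, r1 from Assumptions 5-6 are all supplied by the user, and the function name is hypothetical:

import numpy as np

def grosmc_control(x, s, ds_dx, ds_dt, f, g, r0, r1, eta):
    Sg = ds_dx @ g(x)                                   # (ds/dx) g(x), assumed nonsingular
    Sg_inv = np.linalg.inv(Sg)
    u_con = -Sg_inv @ (ds_dx @ f(x) + ds_dt)            # continuous (equivalent-control) part
    rho = eta + (r0 + r1 * np.linalg.norm(x)) * np.linalg.norm(ds_dx)
    u_dis = -Sg_inv @ (rho * np.sign(s))                # discontinuous part
    return u_con + u_dis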
Theorem 2. Consider the uncertain affine nonlinear system (19) with Assumptions 5-6. Let
u and the sliding surface be given by (37) and (31), respectively. Then the control law forces the
system trajectories to reach the sliding surface in finite time and to remain on it thereafter.






Proof. Taking $V = \frac{1}{2}s^T s$ as a Lyapunov function candidate and using Assumption 5
and Assumption 6 (which bound the lumped matched uncertainty $\omega(t,x)$ by $\|\omega(t,x)\| \le r_0 + r_1\|x\|$), we have

\dot{V} = s^T\dot{s} = s^T\left[ \frac{\partial s}{\partial x}\big( f(x) + g(x)u + \omega(t,x) \big) + \frac{\partial s}{\partial t} \right]
        = s^T\left\{ -\left[ \eta + (r_0 + r_1\|x\|)\left\| \frac{\partial s}{\partial x} \right\| \right]\mathrm{sgn}(s) + \frac{\partial s}{\partial x}\,\omega \right\}
        = -\eta\|s\|_1 - (r_0 + r_1\|x\|)\left\| \frac{\partial s}{\partial x} \right\|\|s\|_1 + s^T\frac{\partial s}{\partial x}\,\omega    (38)
        \le -\eta\|s\|_1 - (r_0 + r_1\|x\|)\left\| \frac{\partial s}{\partial x} \right\|\|s\|_1 + (r_0 + r_1\|x\|)\left\| \frac{\partial s}{\partial x} \right\|\|s\|
        \le -\eta\|s\|_1

where $\|\cdot\|_1$ denotes the 1-norm. Noting the fact that $\|s\|_1 \ge \|s\|$, we get

\dot{V} = s^T\dot{s} \le -\eta\|s\| < 0, \quad \text{for } \|s\| \ne 0.    (39)



This implies that the trajectories of the uncertain nonlinear system (19) will be globally
driven onto the specified sliding surface s = 0 in finite time despite the uncertainties. The
proof is complete.

From (31), we have s(0) = 0; that is, the initial condition lies on the sliding surface. According
to Theorem 2, the uncertain system (19) with the integral sliding surface (31) and the control
law (37) achieves a global sliding mode. The designed system is therefore globally robust and
optimal.

3.4 A simulation example 

The inverted pendulum is widely used for testing control algorithms. In much of the existing
literature, the inverted pendulum is modeled as a nonlinear system, approximate linearization
is adopted to transform the nonlinear model into a linear one, and then an LQR is designed
for the linear system.
To verify the effectiveness and superiority of the proposed GROSMC, we apply it to a single
inverted pendulum and compare it with a conventional LQR.

The nonlinear differential equation of the single inverted pendulum is 



\dot{x}_2 = \frac{ g\sin x_1 - a m L x_2^2 \sin x_1 \cos x_1 + a u\cos x_1 }{ L\left( 4/3 - a m\cos^2 x_1 \right) } + d(t),    (40)

where $x_1$ is the angular position of the pendulum (rad), $x_2$ is the angular speed (rad/s),
$M$ is the mass of the cart, $m$ and $L$ are the mass and half-length of the pendulum,
respectively, $u$ denotes the control input, $g$ is the gravity acceleration, $d(t)$ represents the
external disturbances, and the coefficient $a = m/(M+m)$. The simulation parameters are as
follows: $M = 1$ kg, $m = 0.2$ kg, $L = 0.5$ m, $g = 9.8$ m/s$^2$, and the initial state vector is
$x(0) = [-\pi/18\ \ 0]^T$.
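A short simulation sketch (a Python transcription of (40) under the parameter values above, not the authors' code); the controller u is whatever law is being tested:

import numpy as np

M, m, L, g = 1.0, 0.2, 0.5, 9.8
a = m / (M + m)

def pendulum(t, x, u, d=lambda t: 0.0):
    """Right-hand side of (40): x = [angle (rad), angular speed (rad/s)]."""
    x1, x2 = x
    num = g*np.sin(x1) - a*m*L*x2**2*np.sin(x1)*np.cos(x1) + a*u*np.cos(x1)
    den = L*(4.0/3.0 - a*m*np.cos(x1)**2)
    return np.array([x2, num/den + d(t)])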






Two cases with parameter variations in the inverted pendulum and an external disturbance are
considered here.
Case 1: $m$ and $L$ are 4 times the values given above. Fig. 3 shows the robustness to parameter
variations of the suggested GROSMC and of the conventional LQR.
Case 2: An external disturbance $d(t) = 0.01\sin 2t$ is applied to the inverted pendulum system at
$t = 9$ s. Fig. 4 depicts the responses of the two controllers to this external disturbance.



[Figure: angular position (rad) versus t (s); curves for m = 0.2 kg, L = 0.5 m and m = 0.8 kg, L = 2 m.]

(a) By GROSMC    (b) By conventional LQR

Fig. 3. Angular position responses of the inverted pendulum with parameter variation









[Figure: angular position (rad) versus t (s); curves for Optimal Control and GROSMC.]

Fig. 4. Angular position responses of the inverted pendulum with external disturbance.

From Fig. 3 we can see that the angular position responses of the inverted pendulum with and
without parameter variations are exactly the same under the proposed GROSMC, whereas the
responses under the conventional LQR are clearly sensitive to the parameter variations. This
shows that the proposed GROSMC guarantees complete robustness of the controlled system to
parameter variations. As depicted in Fig. 4, without external disturbance the controlled system
is driven to the equilibrium point by both controllers at about t = 2 s. However, when the
external disturbance is applied to the controlled system at t = 9 s, the inverted pendulum
system still maintains the equilibrium state under the GROSMC, while the LQR cannot.






The switching function curve is shown in Fig. 5. As can be seen, the sliding motion occurs from
the very beginning, without any reaching phase. Thus, the GROSMC provides better features
than the conventional LQR in terms of robustness to system uncertainties.



[Figure: switching function versus t (s); legend: Sliding Surface.]

Fig. 5. The switching function s(t)



3.5 Conclusion 

In this section, the exact linearization technique is first adopted to transform an uncertain
affine nonlinear system into a linear one. An optimal controller is designed for the linear
nominal system, which not only simplifies the optimal controller design but also makes the
optimal control applicable to the entire transformation region. Sliding mode control is then
employed to robustify the optimal regulator. The uncertain system with the proposed
integral sliding surface and control law achieves a global sliding mode, and the ideal
sliding dynamics minimizes the given quadratic performance index. In summary, the
designed system is globally robust and optimal.

4. Optimal sliding mode tracking control for uncertain nonlinear system 

With industrial development, more and more control objectives concern system tracking
problems (Ouyang et al., 2006; Mauder, 2008; Smolders et al., 2008), which are very important
in control theory and synthesis. Taking a robot as an example, it is often required to follow
special trajectories quickly as well as to provide robustness to system uncertainties, including
unmodeled dynamics, internal parameter variations and external disturbances. The main
tracking control problem thus becomes how to design a controller that not only obtains good
tracking performance but also rejects the uncertainties effectively, so as to ensure good
dynamic performance of the system. In this section, a robust LQR tracking control based on
integral sliding mode is proposed for a class of nonlinear uncertain systems.



4.1 Problem formulation and assumption 

Consider a class of uncertain affine nonlinear systems as follows: 



\dot{x} = f(x) + \Delta f(x) + g(x)\left[ u + \delta(x,t,u) \right]
y = h(x)    (41)

where $x \in R^n$ is the state vector, $u \in R^m$ is the control input with $m = 1$, and $y \in R$ is the
system output. $f(x)$, $g(x)$, $\Delta f(x)$ and $h(x)$ are sufficiently smooth in a domain $D \subset R^n$.
$\delta(x,t,u)$ is continuous with respect to $t$ and smooth in $(x,u)$. $\Delta f(x)$ and $\delta(x,t,u)$ represent
the system uncertainties, including unmodeled dynamics, parameter variations and external
disturbances.
Our goal is to design an optimal LQR such that the output $y$ tracks a reference trajectory
$y_r(t)$ asymptotically, a given performance criterion is minimized, and the system exhibits
robustness to the uncertainties.

Assumption 7. The nominal system of the uncertain affine nonlinear system (41), that is,

\dot{x} = f(x) + g(x)u
y = h(x)    (42)

has relative degree $\rho$ in the domain $D$ and $\rho = n$.
Assumption 8. The reference trajectory $y_r(t)$ and its derivatives $y_r^{(i)}(t)$ $(i = 1,\dots,n)$ can be
obtained online, and they are bounded for all $t \ge 0$.
As is well known, if the optimal LQR approach is applied directly to nonlinear systems, it often
leads to a nonlinear two-point boundary-value (TPBV) problem for which an analytical solution
generally does not exist. In order to simplify the design for this tracking problem, the
input-output linearization technique is adopted first.
Considering system (41) and differentiating $y$, we have

y^{(k)} = L_f^k h(x), \quad 0 \le k \le n-1
y^{(n)} = L_f^n h(x) + L_{\Delta f} L_f^{n-1} h(x) + L_g L_f^{n-1} h(x)\left[ u + \delta(x,t,u) \right].

According to the input-output linearization, choose the following nonlinear state transformation:

z = T(x) = \left[ h(x)\ \ L_f h(x)\ \ \cdots\ \ L_f^{n-1} h(x) \right]^T.    (43)

So the uncertain affine nonlinear system (41) can be written as

\dot{z}_i = z_{i+1}, \quad i = 1,\dots,n-1
\dot{z}_n = L_f^n h(x) + L_{\Delta f} L_f^{n-1} h(x) + L_g L_f^{n-1} h(x)\left[ u + \delta(x,t,u) \right].

Define an error state vector in the form of

e = \left[ z_1 - y_r\ \ z_2 - \dot{y}_r\ \ \cdots\ \ z_n - y_r^{(n-1)} \right]^T = z - \Re,

where $\Re = \left[ y_r\ \ \dot{y}_r\ \ \cdots\ \ y_r^{(n-1)} \right]^T$. By this variable substitution $e = z - \Re$, the error
state equation can be described as follows:

\dot{e}_i = e_{i+1}, \quad i = 1,\dots,n-1
\dot{e}_n = L_f^n h(x) + L_{\Delta f} L_f^{n-1} h(x) + L_g L_f^{n-1} h(x)\,u(t) + L_g L_f^{n-1} h(x)\,\delta(x,t,u) - y_r^{(n)}(t).






Let the feedback control law be selected as

u(t) = \frac{ -L_f^n h(x) + v(t) + y_r^{(n)}(t) }{ L_g L_f^{n-1} h(x) }.    (44)
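For illustration, the linearizing tracking law (44) can be written as a one-line Python helper (a sketch; the Lie-derivative callables Lf_n_h and LgLf_h are hypothetical names the user must supply):

def tracking_control(x, v, yr_n, Lf_n_h, LgLf_h):
    # u(t) = ( -L_f^n h(x) + v(t) + y_r^(n)(t) ) / ( L_g L_f^(n-1) h(x) ),  equation (44)
    return (-Lf_n_h(x) + v + yr_n) / LgLf_h(x)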



The error equation of system (41) can then be given in the following form:

\dot{e}(t) =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & 0
\end{bmatrix} e(t)
+ \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} v(t)
+ \begin{bmatrix} 0 \\ \vdots \\ 0 \\ L_{\Delta f} L_f^{n-1} h(x) \end{bmatrix}
+ \begin{bmatrix} 0 \\ \vdots \\ 0 \\ L_g L_f^{n-1} h(x)\,\delta(x,t,u) \end{bmatrix}.    (45)

Therefore, equation (45) can be rewritten as

\dot{e}(t) = A e(t) + \Delta A + B v(t) + \Delta\delta,    (46)

where

A = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}, \quad
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}, \quad
\Delta A = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ L_{\Delta f} L_f^{n-1} h(x) \end{bmatrix}, \quad
\Delta\delta = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ L_g L_f^{n-1} h(x)\,\delta(x,t,u) \end{bmatrix}.

As can be seen, $e \in R^n$ is the system error vector and $v \in R$ is the new control input of the
transformed system. $A \in R^{n \times n}$ and $B \in R^{n \times m}$ are the corresponding constant matrices, and
$\Delta A$ and $\Delta\delta$ represent the uncertainties of the transformed system.

Assumption 9. There exist unknown continuous function vectors of appropriate dimensions
$\Delta\bar{A}$ and $\Delta\bar{\delta}$ such that

\Delta A = B\,\Delta\bar{A}, \qquad \Delta\delta = B\,\Delta\bar{\delta}.

Assumption 10. There exist known constants $a_m$, $d_m$ such that

\|\Delta\bar{A}\| \le a_m, \qquad \|\Delta\bar{\delta}\| \le d_m.

Now the tracking problem becomes to design a state feedback control law $v$ such that
$e \to 0$ asymptotically. If there is no uncertainty, i.e. $\delta(t,e) = 0$, we can select the new input
as $v = -Ke$ to achieve the control objective and obtain the closed-loop dynamics
$\dot{e} = (A - BK)e$. Good tracking performance can be achieved by choosing $K$ using optimal
control theory so that the closed-loop dynamics is asymptotically stable. However, in the
presence of the uncertainties, the closed-loop performance may deteriorate. In the next
section, the integral sliding mode control is adopted to robustify the optimal control law.

4.2 Design of optimal sliding mode tracking controller 

4.2.1 Optimal tracking control of nominal system. 

Ignoring the uncertainties of system (46), the corresponding nominal system is 

\dot{e}(t) = A e(t) + B v(t).    (47)

For the nominal system (47), let $v = v_0$, where $v_0$ minimizes the quadratic performance index

J = \frac{1}{2}\int_0^{\infty}\left[ e^T(t) Q e(t) + v_0^T(t) R v_0(t) \right] dt    (48)

where $Q \in R^{n \times n}$ is a symmetric positive definite matrix and $R \in R^{m \times m}$ (here $m = 1$) is a
positive definite matrix.
According to optimal control theory, an optimal feedback control law can be obtained as

v_0(t) = -R^{-1} B^T P e(t)    (49)

with $P$ the solution of the matrix Riccati equation

P A + A^T P - P B R^{-1} B^T P + Q = 0.

So the closed-loop system dynamics is

\dot{e}(t) = (A - B R^{-1} B^T P)\, e(t).    (50)

The optimal controller designed for system (47) is sensitive to system uncertainties, including
parameter variations and external disturbances, and the performance index (48) may deviate
from its optimal value. In the next part, we use the integral sliding mode control technique to
robustify the optimal control law so that the trajectory of the uncertain system coincides with
that of the nominal system.

4.2.2 The robust optimal sliding surface. 

To get better tracking performance, an integral sliding surface is defined as 

s(e,t) = G e(t) - G\int_0^{t}\left( A - B R^{-1} B^T P \right) e(\tau)\, d\tau - G e(0),    (51)

where $G \in R^{m \times n}$ is a constant matrix designed so that $GB$ is nonsingular, and $e(0)$ is the
initial error state vector.
Differentiating (51) with respect to $t$ and considering system (46), we obtain

\dot{s}(e,t) = G\dot{e}(t) - G(A - B R^{-1} B^T P) e(t)
             = G\left[ A e(t) + \Delta A + B v(t) + \Delta\delta \right] - G(A - B R^{-1} B^T P) e(t)    (52)
             = GB\, v(t) + GB R^{-1} B^T P e(t) + G(\Delta A + \Delta\delta).

Let $\dot{s}(e,t) = 0$; the equivalent control can be obtained as

v_{eq}(t) = -(GB)^{-1}\left[ GB R^{-1} B^T P e(t) + G(\Delta A + \Delta\delta) \right].    (53)

Substituting (53) into (46) and considering Assumption 9, the ideal sliding mode dynamics becomes

\dot{e}(t) = A e(t) + \Delta A + B v_{eq}(t) + \Delta\delta
           = A e(t) + \Delta A - B(GB)^{-1}\left[ GB R^{-1} B^T P e(t) + G(\Delta A + \Delta\delta) \right] + \Delta\delta
           = (A - B R^{-1} B^T P) e(t) - B(GB)^{-1} G(\Delta A + \Delta\delta) + \Delta A + \Delta\delta    (54)
           = (A - B R^{-1} B^T P) e(t) - B(GB)^{-1} GB(\Delta\bar{A} + \Delta\bar{\delta}) + B(\Delta\bar{A} + \Delta\bar{\delta})
           = (A - B R^{-1} B^T P)\, e(t).

It can be seen from equations (50) and (54) that the ideal sliding motion of the uncertain system
and the optimal dynamics of the nominal system coincide; thus the sliding mode is also
asymptotically stable, and the sliding mode guarantees complete robustness of system (46) to
the uncertainties. Therefore, (51) is called a robust optimal sliding surface.

4.2.3 The control law. 

For the uncertain system (46), we propose a control law in the form of

v(t) = v_c(t) + v_d(t),
v_c(t) = -R^{-1} B^T P e(t),    (55)
v_d(t) = -(GB)^{-1}\left[ k s + \varepsilon\, \mathrm{sgn}(s) \right],

where $v_c$ is the continuous part, which is used to stabilize and optimize the nominal system,
and $v_d$ is the discontinuous part, which provides complete compensation for the system
uncertainties; $\mathrm{sgn}(s) = [\mathrm{sgn}(s_1)\ \cdots\ \mathrm{sgn}(s_m)]^T$, and $k$ and $\varepsilon$ are appropriate positive constants.
Theorem 3. Consider the uncertain system (46) with Assumptions 9-10. Let the input $v$ and the
sliding surface be given by (55) and (51), respectively. Then the control law forces the system
trajectories to reach the sliding surface in finite time and to remain on it thereafter if

\varepsilon > (a_m + d_m)\|GB\|.

Proof. Taking $V = \frac{1}{2}s^T s$ as a Lyapunov function candidate and considering
Assumptions 9-10, we obtain

\dot{V} = s^T\dot{s} = s^T\left[ G\dot{e}(t) - G(A - B R^{-1} B^T P) e(t) \right]
        = s^T\left\{ G\left[ A e(t) + \Delta A + B v(t) + \Delta\delta \right] - G(A - B R^{-1} B^T P) e(t) \right\}
        = s^T\left[ G\Delta A - GB R^{-1} B^T P e(t) - (k s + \varepsilon\, \mathrm{sgn}(s)) + G\Delta\delta + GB R^{-1} B^T P e(t) \right]
        = s^T\left\{ -\left[ k s + \varepsilon\, \mathrm{sgn}(s) \right] + G\Delta A + G\Delta\delta \right\}
        = -k\|s\|^2 - \varepsilon\|s\|_1 + s^T( G\Delta A + G\Delta\delta )
        \le -k\|s\|^2 - \varepsilon\|s\|_1 + (a_m + d_m)\|GB\|\|s\|

where $\|\cdot\|_1$ denotes the 1-norm. Note that for any $s \ne 0$ we have $\|s\|_1 \ge \|s\|$. Hence, if
$\varepsilon > (a_m + d_m)\|GB\|$, then

\dot{V} = s^T\dot{s} \le -k\|s\|^2 - \left[ \varepsilon - (a_m + d_m)\|GB\| \right]\|s\| < 0.    (56)

This implies that the trajectories of the uncertain system (46) will be globally driven onto the
specified sliding surface $s(e,t) = 0$ in finite time and will remain on it thereafter. The proof is
completed.
From (51), we have $s(0) = 0$; that is, the initial condition lies on the sliding surface. According
to Theorem 3, the uncertain system (46) achieves a global sliding mode with the integral sliding
surface (51) and the control law (55). The designed system is therefore globally robust and
optimal, and good tracking performance can be obtained with the proposed algorithm.
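A minimal sketch, under the assumption that the design data G, B, P, R and the bounds a_m, d_m of Assumption 10 are available, of the tracking controller (55) together with the reaching-gain check of Theorem 3 (function names are illustrative):

import numpy as np

def tracking_vsc(e, s, G, B, P, R, k, eps):
    v_c = -np.linalg.solve(R, B.T @ P @ e)                  # continuous LQR part of (55)
    v_d = -np.linalg.solve(G @ B, k*s + eps*np.sign(s))     # discontinuous part of (55)
    return v_c + v_d

def reaching_gain_ok(eps, a_m, d_m, G, B):
    return eps > (a_m + d_m) * np.linalg.norm(G @ B)        # condition of Theorem 3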



4.3 Application to robots. 

In recent decades, the tracking control of robot manipulators has received a great deal of
attention. To obtain high-precision control performance, the controller is designed so that each
joint tracks a desired trajectory as closely as possible. It is rather difficult to control robots
because of their highly nonlinear, time-varying dynamic behavior and uncertainties such as
parameter variations, external disturbances and unmodeled dynamics. In this section, a robot
model is investigated to verify the effectiveness of the proposed method.
A 1-DOF robot is described by the following nonlinear dynamics:

\begin{bmatrix} \dot{q} \\ \ddot{q} \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ 0 & -\dfrac{C(q,\dot{q})}{M(q)} \end{bmatrix}
\begin{bmatrix} q \\ \dot{q} \end{bmatrix}
- \begin{bmatrix} 0 \\ \dfrac{G(q)}{M(q)} \end{bmatrix}
+ \begin{bmatrix} 0 \\ \dfrac{1}{M(q)} \end{bmatrix}\tau
+ \begin{bmatrix} 0 \\ \dfrac{1}{M(q)} \end{bmatrix} d(t),    (57)

where $q$, $\dot{q}$ denote the robot joint position and velocity, respectively, $\tau$ is the control torque
produced by the joint actuator, $m$ and $l$ are the mass and length of the manipulator arm,
respectively, and $d(t)$ represents the system uncertainties. Here $C(q,\dot{q}) = 0.03\cos(q)$,
$G(q) = mgl\cos(q)$, and $M(q) = 0.1 + 0.06\sin(q)$. The reference trajectory is $y_r(t) = \sin\pi t$.
According to the input-output linearization technique, choose the state vector $x = [q\ \ \dot{q}]^T$.
Define an error state vector of system (57) as $e = [e_1\ \ e_2]^T = [q - y_r\ \ \dot{q} - \dot{y}_r]^T$, and let the
control law be $\tau = (v + \ddot{y}_r)M(q) + C(q,\dot{q})\dot{q} + G(q)$.
So the error state dynamics of the robot can be written as:

\begin{bmatrix} \dot{e}_1 \\ \dot{e}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} e_1 \\ e_2 \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix} v
+ \begin{bmatrix} 0 \\ 1/M(q) \end{bmatrix} d(t).    (58)

Choose the sliding surface and the control law in the form of (51) and (55), respectively, and the
quadratic performance index in the form of (48). The simulation parameters are as follows:
$m = 0.02$, $g = 9.8$, $l = 0.5$, $d(t) = 0.5\sin 2\pi t$, $k = 18$, $\varepsilon = 6$, $G = [0\ \ 1]$, $Q$ a symmetric positive
definite weighting matrix (first row $[10\ \ 2]$), and $R = 1$. The initial error state vector is
$e(0) = [0.5\ \ 0]^T$.
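The closed-loop robot simulation can be sketched in Python as follows (a sketch using the parameter values above; not the authors' code, and the outer input v would come from (55)):

import numpy as np

m_, l_, g_ = 0.02, 0.5, 9.8
M  = lambda q: 0.1 + 0.06*np.sin(q)          # inertia term of (57)
C  = lambda q, qd: 0.03*np.cos(q)            # Coriolis/damping term of (57)
Gq = lambda q: m_*g_*l_*np.cos(q)            # gravity term of (57)
d  = lambda t: 0.5*np.sin(2*np.pi*t)         # disturbance used in the simulation

def torque(q, qd, v, yr_dd):
    # tau = (v + ddot{y}_r) M(q) + C(q, qd) qd + G(q)
    return (v + yr_dd)*M(q) + C(q, qd)*qd + Gq(q)

def robot(t, x, tau):
    q, qd = x
    qdd = (tau - C(q, qd)*qd - Gq(q) + d(t)) / M(q)
    return np.array([qd, qdd])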

The tracking responses of the joint position $q$ and its velocity are shown in Fig. 6 and Fig. 7,
respectively. The control input $\tau$ is displayed in Fig. 8. From Fig. 6 and Fig. 7 it can be seen
that the position error reaches the equilibrium point quickly and the position tracks the
reference sine signal $y_r$ well. The simulation results show that the proposed scheme manifests
good tracking performance and robustness to parameter variations and the load disturbance.

4.4 Conclusions

In order to achieve good tracking performance for a class of nonlinear uncertain systems, a
sliding mode LQR tracking control has been developed. Input-output linearization is used to
transform the nonlinear system into an equivalent linear one so that the system can be handled
easily. With the proposed control law and the robust optimal sliding surface, the system output
is forced to follow the given trajectory, and the tracking error minimizes the given performance
index even in the presence of uncertainties. The proposed algorithm is applied to a robot
described by a nonlinear model with uncertainties. Simulation results illustrate the feasibility
of the proposed controller for trajectory tracking and its capability of rejecting system
uncertainties.




Fig. 6. The tracking response of $q$

[Figure: legend — reference speed, speed response.]

Fig. 7. The tracking response of $\dot{q}$






[Figure: control input versus t (s).]

Fig. 8. The control input $\tau$

5. Acknowledgements 

This work is supported by the National Natural Science Foundation under Grant No. 60940018.



6. References 

Basin, M.; Rodriguez-Gonzaleza, J.; Fridman, L. (2007). Optimal and robust control for linear 

state-delay systems. Journal of the Franklin Institute. Vol.344, pp.830-845. 
Chen, W. D.; Tang, D. Z.; Wang, H. T. (2004). Robust tracking control of robot manipulators
using backstepping. Journal of System Simulation. Vol.16, No.4, pp. 837-841.
Choi, S. B.; Cheong, C. C; Park, D. W. (1993). Moving switch surfaces for robust control of 

second-order variable structure systems. International Journal of Control. Vol.58, 

No.l, pp. 229-245. 
Choi, S. B.; Park, D. W.; Jayasuriya, S. (1994). A time-varying sliding surface for fast and 

robust tracking control of second-order uncertain systems. Automatica. Vol.30, No. 5, 

pp. 899-904. 
Chiou, K. C.; Huang, S. J. (2005). An adaptive fuzzy controller for robot manipulators.
Mechatronics. Vol.15, pp. 151-177.
Cimen, T.; Banks, S. P. (2004). Global optimal feedback control for general nonlinear systems 

with nonquadratic performance criteria. System & Control Letters. Vol.53, pp.327- 

346. 
Cimen, T.; Banks, S. P. (2004). Nonlinear optimal tracking control with application to super- 
tankers for autopilot design. Automatica. Vol.40, No. 11, pp.1845 - 1863. 
Gao, W. B.; Hung, J C. (1993). Variable structure control of nonlinear systems: A new 

approach. IEEE Transactions on Industrial Electronics. Vol.40, No.l, pp. 45-55. 
Ho, H. F.; Wong, Y. K.; Rad, A. B. (2007). Robust fuzzy tracking control for robotic 

manipulators. Simulation Modelling Practice and Theory. Vol.15, pp.801-816. 
Laghrouche, S.; Plestan, F.; Glumineau, A. (2007). Higher order sliding mode control based 

on integral sliding mode. Automatica. Vol. 43, pp.531-537. 



1 62 Robust Control, Theory and Applications 

Lee, J. H. (2006). Highly robust position control of BLDDSM using an improved integral 

variable structure system. Automatica. Vol.42, pp.929-935. 
Lin, F. J.; Chou, W. D. (2003). An induction motor servo drive using sliding mode controller 

with genetic algorithm. Electric Power Systems Research. Vol.64, pp.93-108. 
Mokhtari, A.; Benallegue, A.; Orlov, Y. (2006). Exact linearization and sliding mode observer 

for a quadrotor unmanned aerial vehicle. International Journal of Robotics and 

Automation. Vol. 21, No.l, pp.39-49. 
Mauder, M. (2008). Robust tracking control of nonholonomic dynamic systems with application
to the bi-steerable mobile robot. Automatica. Vol.44, No.10, pp. 2588-2592.
Ouyang, P. R.; Zhang, W. J.; Madan M. Gupta. (2006). An adaptive switching learning 

control method for trajectory tracking of robot manipulators. Mechatronics. Vol.16, 

No.l, pp.51-61. 
Tang, G. Y.; Sun, H. Y.; Pang, H. P. (2008). Approximately optimal tracking control for 

discrete time-delay systems with disturbances. Progress in Natural Science. Vol.18, 

pp. 225-231. 
Pang, H. P.; Chen, X. (2009). Global robust optimal sliding mode control for uncertain affine 

nonlinear systems. Journal of Systems Engineering and Electronics. Vol.20, No.4, pp. 

838-843. 
Pang H. P.; Wang L.P. (2009). Global Robust Optimal Sliding Mode Control for a class of 

Affine Nonlinear Systems with Uncertainties Based on SDRE. Proceeding of 2009 

Second International Workshop on Computer Science and Engineering. Vol. 2, pp. 276- 

280. 
Pang, H. P.; Tang, G. Y.; Sun, H.Y. (2009). Optimal Sliding Mode Design for a Class of 

Uncertain Systems with Time-delay. Information and Control. Vol.38, No.l, pp.87-92. 
Tang, G. Y.; Gao, D. X. (2005). Feedforward and feedback optimal control for nonlinear 

systems with persistent disturbances. Control and Decision. Vol.20, No.4, pp. 366- 

371. 
Tang, G. Y. (2005). Suboptimal control for nonlinear systems: a successive approximation
approach. Systems and Control Letters. Vol.54, No.5, pp. 429-434.
Tang, G. Y.; Zhao, Y. D.; Zhang, B. L. (2007). Optimal output tracking control for nonlinear 

systems via successive approximation approach. Nonlinear Analysis. Vol.66, No.6, 

pp.1365-1377. 
Shamma, J. S.; Cloutier, J. R. (2001). Existence of SDRE stabilizing feedback. Proceedings of the 

American Control Conference, pp.4253-4257, Arlington VA. 
Smolders, K.; Volckaert, M. Swevers, J. (2008). Tracking control of nonlinear lumped 

mechanical continuous-time systems: A model-based iterative learning approach. 

Mechanical Systems and Signal Processing. Vol.22, No.8, pp.1896-1916. 
Young, K. D.; Özgüner, Ü. (1997). Sliding-mode design for robust linear optimal control.
Automatica. Vol.33, No.7, pp. 1313-1323.



8 



Robust Delay-Independent/Dependent
Stabilization of Uncertain Time-Delay
Systems by Variable Structure Control

Elbrous M. Jafarov 

Faculty of Aeronautics and Astronautics, Istanbul Technical University 

Turkey 



1. Introduction 



It is well known that many engineering control systems such as conventional oil-chemical 
industrial processes, nuclear reactors, long transmission lines in pneumatic, hydraulic and 
rolling mill systems, flexible joint robotic manipulators and machine-tool systems, jet engine 
and automobile control, human-autopilot systems, ground controlled satellite and 
networked control and communication systems, space autopilot and missile-guidance 
systems, etc. contain some time-delay effects, model uncertainties and external disturbances. 
These processes and plants can be modeled by some uncertain dynamical systems with state 
and input delays. The existence of time-delay effects is frequently a source of instability and 
it degrades the control performances. The stabilization of systems with time-delay is not 
easier than that of systems without time-delay. Therefore, the stability analysis and 
controller design for uncertain systems with delay are important both in theory and in 
practice. The problem of robust stabilization of uncertain time-delay systems by various
types of controllers, such as PID controllers, Smith predictors, time-delay controllers and,
more recently, sliding mode controllers, has received considerable attention from researchers.
However, in contrast to variable structure systems without time-delay, there are relatively
few papers concerning the sliding mode control of time-delay systems.
Generally, stability analysis can be divided into two categories: delay-independent and
delay-dependent. It is worth mentioning that delay-dependent conditions are less
conservative than delay-independent ones because they use information on the size of the
delays, especially when the time-delays are small. As is known from (Utkin, 1977)-(Jafarov, 2009),
sliding mode control has several useful advantages, e.g. fast response, good transient
performance, and robustness to plant parameter variations and external disturbances. For this
reason, sliding mode control is now considered an efficient tool for the design of robust
controllers for the stabilization of complex systems with parameter perturbations and
external disturbances. Some new problems of the sliding mode control of time-delay
systems have been addressed in the papers (Shyu & Yan, 1993)-(Jafarov, 2005). Shyu and Yan
(Shyu & Yan, 1993) have established a new sufficient condition to guarantee the robust
stability and α-stability of uncertain systems with a single time-delay. Using these conditions, a
variable structure controller is designed to stabilize the time-delay systems with
uncertainties. Koshkouei and Zinober (Koshkouei & Zinober, 1996) have designed a new




sliding mode controller for MIMO canonical controllable time-delay systems with matched 
external disturbances by using Lyapunov-Krasovskii functional. Robust stabilization of 
time-delay systems with uncertainties by using sliding mode control has been considered by 
Luo, De La Sen and Rodellar (Luo et al., 1997). However, a disadvantage of this design
approach is that the variable structure controller is not simple; moreover, the equivalent control
term depends on the unavailable external disturbances. Li and DeCarlo (Li & De Carlo, 2003)
have proposed a new robust four-term sliding mode controller design method for a class
of multivariable time-delay systems with unmatched parameter uncertainties and matched
external disturbances by using the Lyapunov-Krasovskii functional combined with LMI
techniques. The behavior and design of sliding mode control systems with state and input
delays are considered by Perruquetti and Barbot (Perruquetti & Barbot, 2002) by using
Lyapunov-Krasovskii functionals.
Four-term robust sliding mode controllers for matched uncertain systems with single or
multiple, constant or time-varying state delays are designed by Gouaisbaut, Dambrine and
Richard (Gouisbaut et al., 2002) by using Lyapunov-Krasovskii functionals and Lyapunov-
Razumikhin functions combined with LMI techniques. Five-term sliding mode controllers for
time-varying delay systems with structured parameter uncertainties have been designed by
Fridman, Gouisbaut, Dambrine and Richard (Fridman et al., 2003) via a descriptor approach
combined with the Lyapunov-Krasovskii functional method. In (Cao et al., 2007) some new
delay-dependent stability criteria for multivariable uncertain networked control systems with
several constant delays, based on a Lyapunov-Krasovskii functional combined with the
descriptor approach and LMI techniques, are developed by Cao, Zhong and Hu. A robust
sliding mode control of single-state-delayed uncertain systems with parameter perturbations
and external disturbances is designed by Jafarov (Jafarov, 2005). In the survey paper
(Hung et al., 1993) the various types of reaching conditions, variable structure control laws,
switching schemes and their applications in industrial systems are reported by J. Y. Hung, Gao
and J. C. Hung. The implementation of a tracking variable structure controller with a boundary
layer and feed-forward term for robotic arms is developed by Xu, Hashimoto, Slotine, Arai and
Harashima (Xu et al., 1989). A new fast-response sliding mode current controller for boost-type
converters is designed by Tan, Lai, Tse, Martinez-Salamero and Wu (Tan et al., 2007). By
constructing new types of Lyapunov functionals and additional free-weighting matrices, some
new, less conservative delay-dependent stability conditions for uncertain systems with constant
but unknown time-delay have been presented in (Li et al., 2010) and its references.

Motivated by these investigations, the problem of sliding mode controller design for uncertain
multi-input systems with several fixed state delays, for the delay-independent and
delay-dependent cases, is addressed in this chapter. A new combined sliding mode controller
is considered and designed for the stabilization of perturbed multi-input time-delay systems
with matched parameter uncertainties and external disturbances. Delay-independent/dependent
stability and sliding mode existence conditions are derived by using the Lyapunov-Krasovskii
functional and Lyapunov function methods and are formulated in terms of LMIs. Delay
bounds are determined from the improved stability conditions. In practical implementation,
the chattering problem can be avoided by using a saturation function (Hung et al., 1993),
(Xu et al., 1989).

Five numerical examples with simulation results are given to illustrate the usefulness of the 
proposed design method. 




2. System description and assumptions 

Let us consider a multi-input state time-delay system with matched parameter uncertainties
and external disturbances described by the following state-space equation:

\dot{x}(t) = (A_0 + \Delta A_0)x(t) + (A_1 + \Delta A_1)x(t-h_1) + \dots + (A_N + \Delta A_N)x(t-h_N) + Bu(t) + Df(t), \quad t > 0
x(t) = \phi(t), \quad -h \le t \le 0    (1)

where $x(t) \in R^n$ is the measurable state vector, $u(t) \in R^m$ is the control input, $A_0, A_1, \dots, A_N$
and $B$ are known constant matrices of appropriate dimensions, with $B$ of full rank,
$h = \max[h_1, h_2, \dots, h_N]$, $h_i > 0$, $h_1, h_2, \dots, h_N$ are known constant time-delays, $\phi(t)$ is a
continuous vector-valued initial function on $-h \le t \le 0$; $\Delta A_0, \Delta A_1, \dots, \Delta A_N$ and $D$ are the
parameter uncertainties, and $f(t)$ is an unknown but norm-bounded external disturbance.
Taking the known advantages of sliding mode, we want to design a simple, suitable sliding
mode controller for the stabilization of the uncertain time-delay system (1).
We need to make the following conventional assumptions for our design problem.
Assumption 1:
a. $(A_0, B)$ is stabilizable;
b. The parameter uncertainties and external disturbances are matched with the control input,
i.e. there exist matrices $E_0(t), E(t), E_1(t), \dots, E_N(t)$ such that

\Delta A_0(t) = BE_0(t);\ \ \Delta A_1(t) = BE_1(t);\ \ \dots;\ \ \Delta A_N(t) = BE_N(t);\ \ D(t) = BE(t)    (2)

with norm-bounded matrices:

\max\|E_0(t)\| \le \alpha_0;\ \ \max\|E_1(t)\| \le \alpha_1;\ \ \dots;\ \ \max\|E_N(t)\| \le \alpha_N
\|E(t)\| = \alpha
\|f(t)\| \le f_0    (3)

where $\alpha_0, \alpha_1, \dots, \alpha_N, \alpha$ and $f_0$ are known positive scalars.

The control goal is to design a combined variable structure controller for robust stabilization 

of time-delay system (1) with matched parameter uncertainties and external disturbances. 
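Assumption 1(b) can be checked numerically for given matrices; a minimal sketch (not from the chapter) recovers E_0 by least squares and tests whether the uncertainty lies in the range of B:

import numpy as np

def matched(B, Delta_A, tol=1e-9):
    """Return (is_matched, E0) such that B @ E0 best approximates Delta_A."""
    E0, *_ = np.linalg.lstsq(B, Delta_A, rcond=None)
    return np.linalg.norm(B @ E0 - Delta_A) < tol, E0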

3. Control law and sliding surface 

To achieve this goal, we form the following type of combined variable structure controller:

u(t) = u_{lin}(t) + u_{eq}(t) + u_{vs}(t) + u_r(t)    (4)

where

u_{lin}(t) = -Gx(t)    (5)

u_{eq}(t) = -(CB)^{-1}\left[ CA_0 x(t) + CA_1 x(t-h_1) + \dots + CA_N x(t-h_N) \right]    (6)

u_{vs}(t) = -\left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\frac{s(t)}{\|s(t)\|}    (7)

u_r(t) = -\delta\,\frac{s(t)}{\|s(t)\|}    (8)

where $k_0, k_1, \dots, k_N$ and $\delta$ are the scalar gain parameters to be selected; $G$ is a design matrix;
$(CB)^{-1}$ is a non-singular $m \times m$ matrix. The sliding surface on which the perturbed time-delay
system states must be stable is defined as a linear function of the undelayed system states as follows:

s(t) = \Gamma C x(t)    (9)

where $C$ is an $m \times n$ gain matrix of full rank to be selected; $\Gamma$ is chosen as the identity $m \times m$
matrix that is used to diagonalize the control.
The equivalent control term (6) for the non-perturbed time-delay system is determined from the
following equation:

\dot{s}(t) = C\dot{x}(t) = CA_0 x(t) + CA_1 x(t-h_1) + \dots + CA_N x(t-h_N) + CBu(t) = 0    (10)

Substituting (6) into (1), we have the non-perturbed or ideal sliding time-delay motion of the
nominal system as follows:

\dot{x}(t) = \hat{A}_0 x(t) + \hat{A}_1 x(t-h_1) + \dots + \hat{A}_N x(t-h_N)    (11)

where

(CB)^{-1}C = G_{eq}, \quad A_0 - BG_{eq}A_0 = \hat{A}_0, \quad A_1 - BG_{eq}A_1 = \hat{A}_1, \quad \dots, \quad A_N - BG_{eq}A_N = \hat{A}_N    (12)

Note that the constructed sliding mode controller consists of four terms:
1. The linear control term, needed to guarantee that the system states can be stabilized on the
sliding surface;
2. The equivalent control term, for the compensation of the nominal part of the perturbed
time-delay system;
3. The variable structure control term, for the compensation of the parameter uncertainties of
the system matrices;
4. The min-max or relay term, for the rejection of the external disturbances.
The structure of these control terms is typical, and they are very simple in their practical
implementation. The design parameters $G, C, k_0, k_1, \dots, k_N, \delta$ of the combined controller (4) for
the delay-independent case can be selected from the sliding conditions and the stability analysis
of the perturbed sliding time-delay system.
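For concreteness, a hedged Python sketch of the combined controller (4)-(8) for a single state delay (N = 1); x_hist is an interpolant of the past state and all matrices and gains are the design data discussed above (the function name is illustrative):

import numpy as np

def vsc_control(t, x, x_hist, h1, A0, A1, B, C, G, k0, k1, delta):
    xd = x_hist(t - h1)                                    # delayed state x(t - h1)
    s = C @ x                                              # sliding function (9) with Gamma = I
    ns = np.linalg.norm(s) or 1.0                          # guard against division by zero
    CB_inv = np.linalg.inv(C @ B)                          # CB assumed nonsingular
    u_lin = -G @ x                                         # linear term (5)
    u_eq  = -CB_inv @ (C @ A0 @ x + C @ A1 @ xd)           # equivalent control (6)
    u_vs  = -(k0*np.linalg.norm(x) + k1*np.linalg.norm(xd)) * s/ns   # variable structure term (7)
    u_r   = -delta * s/ns                                  # min-max / relay term (8)
    return u_lin + u_eq + u_vs + u_r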

However, in order to carry out the delay-dependent stability analysis and choose an
appropriate Lyapunov-Krasovskii functional, let us first transform the nominal sliding
time-delay system (11) by using the Leibniz-Newton formula. Since $x(t)$ is continuously
differentiable for $t > 0$, the time-delay terms can be presented as:

x(t-h_1) = x(t) - \int_{t-h_1}^{t}\dot{x}(\theta)\,d\theta, \quad \dots, \quad x(t-h_N) = x(t) - \int_{t-h_N}^{t}\dot{x}(\theta)\,d\theta    (13)




Then, the system (11) can be rewritten as

\dot{x}(t) = (\hat{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)x(t) - \hat{A}_1\int_{t-h_1}^{t}\dot{x}(\theta)\,d\theta - \dots - \hat{A}_N\int_{t-h_N}^{t}\dot{x}(\theta)\,d\theta    (14)

Substituting again (11) into (14) yields:

\dot{x}(t) = (\hat{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)x(t)
 - \hat{A}_1\int_{t-h_1}^{t}\left[ \hat{A}_0 x(\theta) + \hat{A}_1 x(\theta-h_1) + \dots + \hat{A}_N x(\theta-h_N) \right]d\theta
 - \dots - \hat{A}_N\int_{t-h_N}^{t}\left[ \hat{A}_0 x(\theta) + \hat{A}_1 x(\theta-h_1) + \dots + \hat{A}_N x(\theta-h_N) \right]d\theta    (15)
 = (\hat{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)x(t) - \hat{A}_1\hat{A}_0\int_{t-h_1}^{t}x(\theta)\,d\theta - \hat{A}_1^2\int_{t-h_1}^{t}x(\theta-h_1)\,d\theta - \dots - \hat{A}_1\hat{A}_N\int_{t-h_1}^{t}x(\theta-h_N)\,d\theta
 - \dots - \hat{A}_N\hat{A}_0\int_{t-h_N}^{t}x(\theta)\,d\theta - \hat{A}_N\hat{A}_1\int_{t-h_N}^{t}x(\theta-h_1)\,d\theta - \dots - \hat{A}_N^2\int_{t-h_N}^{t}x(\theta-h_N)\,d\theta

Then, in addition to (15), the perturbed sliding time-delay system with control action (4), i.e. the
overall closed-loop system, can be formulated as:

\dot{x}(t) = (\tilde{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)x(t) - \hat{A}_1\hat{A}_0\int_{t-h_1}^{t}x(\theta)\,d\theta - \hat{A}_1^2\int_{t-h_1}^{t}x(\theta-h_1)\,d\theta
 - \dots - \hat{A}_1\hat{A}_N\int_{t-h_1}^{t}x(\theta-h_N)\,d\theta - \dots - \hat{A}_N\hat{A}_0\int_{t-h_N}^{t}x(\theta)\,d\theta - \hat{A}_N\hat{A}_1\int_{t-h_N}^{t}x(\theta-h_1)\,d\theta
 - \dots - \hat{A}_N^2\int_{t-h_N}^{t}x(\theta-h_N)\,d\theta + \Delta A_0 x(t) + \Delta A_1 x(t-h_1) + \dots + \Delta A_N x(t-h_N)    (16)
 - B\left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\frac{s(t)}{\|s(t)\|} - B\delta\frac{s(t)}{\|s(t)\|} + Df(t)

where $\tilde{A}_0 = \hat{A}_0 - BG$; the gain matrix $G$ can be selected such that $\tilde{A}_0$ has the desired
eigenvalues.
The design parameters $G, C, k_0, k_1, \dots, k_N, \delta$ of the combined controller (4) for the
delay-dependent case can be selected from the sliding conditions and the stability analysis of
the perturbed sliding time-delay system (16).

4. Robust delay-independent stabilization 

In this section, the existence condition of the sliding manifold and delay-independent 
stability analysis of perturbed sliding time-delay systems are presented. 

4.1 Robust delay-independent stabilization on the sliding surface 

In this section, the sliding manifold is designed so that on it, or in its neighborhood, and
differently from existing methods, the perturbed sliding time-delay system (1), (4) is globally
asymptotically stable with respect to the state coordinates. The stability results for the
perturbed system are formulated in the following theorem.

Theorem 1: Suppose that Assumption 1 holds. Then the multivariable time-delay 
system (1) with matched parameter perturbations and external disturbances driven by 
combined controller (4) and restricted to the sliding surface s(t)=0 is robustly globally 
asymptotically delay-independent stable with respect to the state variables, if the 
following LMI conditions and parameter requirements are satisfied: 



H= ^^ iVL - u <0 (17) 

(18) 
(19) 

(20) 

where P,R 1 ,...R N are some symmetric positive definite matrices which are a feasible 
solution of LMI (17) with (18); A = A - BG in which a gain matrix G can be assigned by 
pole placement such that A has some desirable eigenvalues. 

Proof: Choose a Lyapunov-Krasovskii functional candidate as follows: 



A T P + PA +R^+... + R N PAi .. 


PA N 


{PM) T -R a .. 





(PAn) t .. 


-R» 


CB = B T PB > 




^0 = ^O'^l = ^l'f-'f^N = a N 




s*h 





N t 



V = x T (t)Px(t) + ^ J x 1 \6)R i x{6)de 

i=lt-h: 



(21) 



The time-derivative of (21) along the state trajectories of the time-delay system (1), (4) can be
calculated as follows:

\dot{V} = 2x^T(t)P\left[ A_0 x(t) + A_1 x(t-h_1) + \dots + A_N x(t-h_N) + \Delta A_0 x(t) + \Delta A_1 x(t-h_1) + \dots + \Delta A_N x(t-h_N) + Bu(t) + Df(t) \right]
 + x^T(t)(R_1 + \dots + R_N)x(t) - x^T(t-h_1)R_1 x(t-h_1) - \dots - x^T(t-h_N)R_N x(t-h_N)
 = 2x^T(t)P\hat{A}_0 x(t) + 2x^T(t)P\hat{A}_1 x(t-h_1) + \dots + 2x^T(t)P\hat{A}_N x(t-h_N)
 + 2x^T(t)PBE_0 x(t) + 2x^T(t)PBE_1 x(t-h_1) + \dots + 2x^T(t)PBE_N x(t-h_N)
 - 2x^T(t)PB\left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\frac{s(t)}{\|s(t)\|}
 - 2x^T(t)PBGx(t) - 2\delta x^T(t)PB\frac{s(t)}{\|s(t)\|} + 2x^T(t)PBEf(t)
 + x^T(t)(R_1 + \dots + R_N)x(t) - x^T(t-h_1)R_1 x(t-h_1) - \dots - x^T(t-h_N)R_N x(t-h_N)

Since $x^T(t)PB = s^T(t)$, we obtain:

\dot{V} \le
\begin{bmatrix} x(t) \\ x(t-h_1) \\ \vdots \\ x(t-h_N) \end{bmatrix}^T
\begin{bmatrix}
\tilde{A}_0^T P + P\tilde{A}_0 + R_1 + \dots + R_N & P\hat{A}_1 & \cdots & P\hat{A}_N \\
(P\hat{A}_1)^T & -R_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
(P\hat{A}_N)^T & 0 & \cdots & -R_N
\end{bmatrix}
\begin{bmatrix} x(t) \\ x(t-h_1) \\ \vdots \\ x(t-h_N) \end{bmatrix}
 - \left[ (k_0 - \alpha_0)\|x(t)\|\|s(t)\| + (k_1 - \alpha_1)\|x(t-h_1)\|\|s(t)\| + \dots + (k_N - \alpha_N)\|x(t-h_N)\|\|s(t)\| \right]
 - (\delta - f_0)\|s(t)\|    (22)

Since (17)-(20) hold, (22) reduces to:

\dot{V} \le z^T(t)Hz(t) < 0    (23)

where $z^T(t) = \left[ x^T(t)\ \ x^T(t-h_1)\ \ \dots\ \ x^T(t-h_N) \right]$.

Therefore, we can conclude that the perturbed time-delay system (1), (4) is robustly globally 
asymptotically delay-independent stable with respect to the state coordinates. Theorem 1 is 
proved. 



4.2 Existence conditions 

The final step of the control design is the derivation of the sliding mode existence conditions 
or the reaching conditions for the perturbed time-delay system (1),(4) states to the sliding 
manifold in finite time. These results are summarized in the following theorem. 

Theorem 2: Suppose that Assumption 1 holds. Then the perturbed multivariable time-delay
system (1) states with matched parameter uncertainties and external disturbances driven by
controller (4) converge to the sliding surface s(t) = 0 in finite time if the following conditions
are satisfied:

k_0 = \alpha_0 + \|G\|;\ \ k_1 = \alpha_1;\ \ \dots;\ \ k_N = \alpha_N    (24)

\delta \ge f_0    (25)

Proof: Let us choose a modified Lyapunov function candidate as:

V = \frac{1}{2}s^T(t)(CB)^{-1}s(t)    (26)



The time-derivative of (26) along the state trajectories of time-delay system (1), (4) can be 
calculated as follows: 






\dot{V} = s^T(t)(CB)^{-1}\dot{s}(t) = s^T(t)(CB)^{-1}C\dot{x}(t)
 = s^T(t)(CB)^{-1}C\left[ A_0 x(t) + A_1 x(t-h_1) + \dots + A_N x(t-h_N)
 + \Delta A_0 x(t) + \Delta A_1 x(t-h_1) + \dots + \Delta A_N x(t-h_N) + Bu(t) + Df(t) \right]
 = s^T(t)(CB)^{-1}\Big[ CA_0 x(t) + CA_1 x(t-h_1) + \dots + CA_N x(t-h_N)
 + CBE_0 x(t) + CBE_1 x(t-h_1) + \dots + CBE_N x(t-h_N)
 - CB(CB)^{-1}\left[ CA_0 x(t) + CA_1 x(t-h_1) + \dots + CA_N x(t-h_N) \right]
 - CB\left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\frac{s(t)}{\|s(t)\|} - CBGx(t) - CB\delta\frac{s(t)}{\|s(t)\|} + CBEf(t) \Big]
 = s^T(t)\left[ E_0 x(t) + E_1 x(t-h_1) + \dots + E_N x(t-h_N) \right]
 - \left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\|s(t)\| - s^T(t)Gx(t) - \delta\|s(t)\| + s^T(t)Ef(t)
 \le -\left[ (k_0 - \alpha_0 - \|G\|)\|x(t)\| + (k_1 - \alpha_1)\|x(t-h_1)\| + \dots + (k_N - \alpha_N)\|x(t-h_N)\| \right]\|s(t)\| - (\delta - f_0)\|s(t)\|    (27)

Since (24) and (25) hold, (27) reduces to:

\dot{V} = s^T(t)(CB)^{-1}\dot{s}(t) \le -(\delta - f_0)\|s(t)\| \le -\eta\|s(t)\|    (28)

where

\eta = \delta - f_0 > 0    (29)

Hence we can evaluate that

\dot{V}(t) \le -\eta\sqrt{2\lambda_{\min}(CB)}\,\sqrt{V(t)}    (30)



The last inequality (30) is known to prove the finite-time convergence of system (1), (4) 
towards the sliding surface s(t)=0 (Utkin, 1977), (Perruquetti & Barbot, 2002). Therefore, 
Theorem 2 is proved. 



4.3 Numerical examples and simulation 

In order to demonstrate the usefulness of the proposed control design techniques, let us
consider the following examples.
Example 1: Consider a networked control time-delay system (1), (4) with parameters taken
from (Cao et al., 2007):



A_0 = \begin{bmatrix} -4 & 0 \\ -1 & -3 \end{bmatrix}, \qquad
A_1 = \begin{bmatrix} 0 & -1.5 \\ -1 & -0.5 \end{bmatrix}    (31)

\Delta A_0 = 0.5\sin(t)\,A_0, \qquad \Delta A_1 = 0.5\cos(t)\,A_1, \qquad f = 0.3\sin(t)

The LMI stability and sliding mode existence conditions are computed by MATLAB
programming (see Appendix 1), where the LMI Control Toolbox is used. The computational
results are the following:

\hat{A}_0 = \begin{bmatrix} -1.0866 & 1.0866 \\ 1.9134 & -1.9134 \end{bmatrix}, \quad
\hat{A}_1 = \begin{bmatrix} -0.1811 & 0.1811 \\ 0.3189 & -0.3189 \end{bmatrix}, \quad
G_1 = \begin{bmatrix} 0.9567 & 1.2933 \end{bmatrix}, \quad
\tilde{A}_0 = \begin{bmatrix} -3.0000 & -1.5000 \\ 0.0000 & -4.5000 \end{bmatrix},

\mathrm{eig}(\hat{A}_0) = \{0,\ -3\}, \quad \mathrm{eig}(\hat{A}_1) = \{0,\ -0.5\}, \quad \mathrm{eig}(\tilde{A}_0) = \{-3,\ -4.5\},

the LMI left-hand side is

\begin{bmatrix} -1.8137 & 0.0020 & -0.1392 & 0.1392 \\ 0.0020 & -1.7813 & 0.1382 & -0.1382 \\ -0.1392 & 0.1382 & -1.7364 & 0.0010 \\ 0.1392 & -0.1382 & 0.0010 & -1.7202 \end{bmatrix}
with eigenvalues \{-2.0448,\ -1.7952,\ -1.7274,\ -1.4843\},

P = \begin{bmatrix} 0.6308 & -0.0782 \\ -0.0782 & 0.3891 \end{bmatrix}, \quad \mathrm{eig}(P) = \{0.3660,\ 0.6539\}, \quad
R_1 = \begin{bmatrix} 1.7364 & -0.0010 \\ -0.0010 & 1.7202 \end{bmatrix}, \quad \mathrm{eig}(R_1) = \{1.7202,\ 1.7365\},

B^T P = \begin{bmatrix} 1.1052 & 0.6217 \end{bmatrix}, \quad B^T P B = 3.4538, \quad (B^T P B)^{-1} = 0.2895, \quad \|G_1\| = 1.6087,

k_0 = 2.1087, \quad k_1 = 0.5, \quad \delta > 0.3, \quad H < 0.
The networked control time-delay system is robustly asymptotically delay-independent 
stable. 
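The final check H < 0 can be reproduced outside the LMI toolbox with a few lines of Python (a sketch, not the authors' MATLAB code; the matrices below are rounded placeholders standing in for the feasible solution reported above):

import numpy as np

A0t = np.array([[-3.0, -1.5], [ 0.0, -4.5]])     # closed-loop matrix (assumed values)
A1h = np.array([[-0.18, 0.18], [ 0.32, -0.32]])  # delayed-dynamics matrix (assumed values)
P   = np.array([[ 0.63, -0.08], [-0.08,  0.39]]) # candidate LMI solution (assumed values)
R1  = np.array([[ 1.74,  0.00], [ 0.00,  1.72]])

H = np.block([[A0t.T @ P + P @ A0t + R1, P @ A1h],
              [(P @ A1h).T,              -R1    ]])
eigs = np.linalg.eigvalsh(0.5*(H + H.T))         # symmetrize before the eigenvalue test
print("H negative definite:", np.all(eigs < 0))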

Example 2: Consider a time-delay system (1), (4) with parameters: 



"-1 


0.7" 


,A 1 = 


"0.1 


0.1" 


, A 2 = 


"0.2 


" 


,B = 


"1" 


[o.3 


1 


L u 


0.2 


L u 


0.1 J 




1 



hy = 0.1 , h 2 = 0.2 



(32) 



AA n 



0.2sin(f) 

O.lsin(f) 



,AAt 



O.lcos(f) 

0.2cos(f) 



Matching condition for external disturbances is given by: 



D = BE-- 



,AA 



0.2cos(f) 

O.lcos(f) 



0.2cost; /(f) = 0.2 cos f 



The LMI stability and sliding mode existence conditions are computed by MATLAB
programming (see Appendix 2), where the LMI Control Toolbox is used. The computational
results are the following:

\hat{A}_0 = \begin{bmatrix} -0.3947 & -0.0911 \\ 0.9053 & 0.2089 \end{bmatrix}, \quad
\hat{A}_1 = \begin{bmatrix} -0.0304 & -0.0304 \\ 0.0696 & 0.0696 \end{bmatrix}, \quad
\hat{A}_2 = \begin{bmatrix} 0.0607 & -0.0304 \\ -0.1393 & 0.0696 \end{bmatrix},

G_{eq} = \begin{bmatrix} 0.6964 & 0.3036 \end{bmatrix}, \quad
G = \begin{bmatrix} -4.5759 & 12.7902 \end{bmatrix}, \quad
\tilde{A}_0 = \begin{bmatrix} 4.1812 & -12.8812 \\ 5.4812 & -12.5812 \end{bmatrix}, \quad
\mathrm{eig}(\tilde{A}_0) = \{-4.2 \pm 0.6i\},

P = \begin{bmatrix} 2.0633 & 0.7781 \\ 0.7781 & 0.4592 \end{bmatrix}, \quad \mathrm{eig}(P) = \{0.1438,\ 2.3787\}, \quad
R_1 = R_2 = \begin{bmatrix} 1.0414 & 0.2855 \\ 0.2855 & 1.1000 \end{bmatrix}, \quad \mathrm{eig}(R_1) = \mathrm{eig}(R_2) = \{0.7837,\ 1.3578\},

B^T P = \begin{bmatrix} 2.8414 & 1.2373 \end{bmatrix}, \quad B^T P B = 4.0788, \quad (B^T P B)^{-1} = 0.2452, \quad \|G\| = 13.5841,

the eigenvalues of the LMI left-hand side are \{-1.3581,\ -1.3578,\ -1.3412,\ -0.7848,\ -0.7837,\ -0.1916\},

\alpha_0 = 0.2, \quad \alpha_1 = 0.2, \quad \alpha_2 = 0.2, \quad d = \max\|D\| = 0.2, \quad f_0 = \max\|f(t)\| = 0.2828,

k_0 = 13.7841, \quad k_1 = 0.2, \quad k_2 = 0.2, \quad \delta > 0.2828, \quad H < 0.
Thus, we have designed all the parameters of the combined sliding mode controller. 
Aircraft control design example 3: Consider the lateral-directional control design of the DC-8
aircraft in a cruise-flight configuration for M = 0.84, h = 33,000 ft, and V = 825 ft/s, with
nominal parameters taken from (Schmidt, 1998):


\dot{x}(t) =
\begin{bmatrix}
-0.228 & 2.148 & -0.021 & 0.0 \\
-1.0 & -0.0869 & 0.0 & 0.0390 \\
0.335 & -4.424 & -1.184 & 0.0 \\
0.0 & 0.0 & 1.0 & 0.0
\end{bmatrix} x(t)
+ \begin{bmatrix}
-1.169 & 0.065 \\
0.0223 & 0.0 \\
0.0547 & 2.120 \\
0.0 & 0.0
\end{bmatrix}
\begin{bmatrix} \delta_r \\ \delta_a \end{bmatrix}    (33)



where $\beta$ is the sideslip angle (deg), $p$ is the roll rate (deg/s), $\phi$ is the bank angle (deg), $r$ is
the yaw rate (deg/s), $\delta_r$ is the rudder control, and $\delta_a$ is the aileron control. However, some
small transient time-delay effects may occur in this equation because of the influence of sideslip
on the aerodynamic flow and of the flexibility of the airframe and surfaces in lateral-directional
and directional-lateral couplings. The gain constants of the gyro, rate gyro and actuators are
included in the lateral-directional equation of motion. Therefore, it is assumed that the
lateral-directional equation of motion contains some delay effects and perturbed parameters
as follows:






-0.002 0.0 ' 

0.0 0.004 

0.034 -0.442 

0.0 0.0 



(34) 



AA = 0.1 Aq sin(f) , AA 1 = 0.1 A 1 cos(f) ,D = I±;f = 0.2sin(f) ; \ = 0.01 - 0.04s 

The LMI stability and sliding mode existence conditions are computed by MATLAB 
programming (see Appendix 3) where LMI Control Toolbox is used. The computational 
results are following: 



G = \begin{bmatrix} -0.8539 & 0.0163 & 0.0262 & 0 \\ 0.0220 & -0.0001 & 0.4710 & 0 \end{bmatrix}, \quad
G_1 = \begin{bmatrix} -0.5925 & 0.0890 & 0.1207 & 0.0501 \\ 0.0689 & -0.0086 & 0.3452 & 0.0485 \end{bmatrix},

\mathrm{eig}(\tilde{A}_0) = \{-0.5 \pm 0.082i,\ -0.3,\ -0.2\}, \quad \mathrm{eig}(\hat{A}_0) = \{-0.0621,\ -0.0004,\ 0,\ 0\},

P = \begin{bmatrix} 72.9293 & 39.4515 & -2.3218 & 24.7039 \\ 39.4515 & 392.5968 & 10.8368 & -1.4649 \\ -2.3218 & 10.8368 & 67.2609 & -56.4314 \\ 24.7039 & -1.4649 & -56.4314 & 390.7773 \end{bmatrix}, \quad
\mathrm{eig}(P) = \{57.3353,\ 66.3033,\ 397.7102,\ 402.2156\},

R_1 = \begin{bmatrix} 52.5926 & 29.5452 & 0.3864 & 2.5670 \\ 29.5452 & 62.3324 & 3.6228 & -0.4852 \\ 0.3864 & 3.6228 & 48.3292 & -32.7030 \\ 2.5670 & -0.4852 & -32.7030 & 61.2548 \end{bmatrix}, \quad
\mathrm{eig}(R_1) = \{21.3032,\ 27.3683,\ 86.9363,\ 88.9010\},

B^T P B = \begin{bmatrix} 98.3252 & 8.5737 \\ 8.5737 & 301.9658 \end{bmatrix}, \quad
(B^T P B)^{-1} = \begin{bmatrix} 0.0102 & -0.0003 \\ -0.0003 & 0.0033 \end{bmatrix}, \quad \|G_1\| = 0.8545,

the eigenvalues of the LMI left-hand side are
\{-88.9592,\ -86.9820,\ -78.9778,\ -75.8961,\ -27.3686,\ -21.3494,\ -16.0275,\ -11.9344\}, and

k_0 = 1.0545, \quad k_1 = 0.5, \quad \delta > 0.2, \quad H < 0.

Thus, we have designed all the parameters of the aircraft control system, and the uncertain
time-delay system (1), (4) with the given nominal (33) and perturbed (34) parameters has been
simulated using MATLAB-SIMULINK. The SIMULINK block diagram of the uncertain
time-delay system with the variable structure controller (VSC) is given in Fig. 1. Simulation
results are given in Figs. 2, 3, 4 and 5. As seen from the last four figures, the system time
responses to the rudder and aileron pulse inputs (0.3 within 3-6 s) are stabilized very well
(the settling time is about 15-20 seconds), while the state time responses of the aircraft without
control action, shown in Fig. 5, are unstable or have poor dynamic characteristics. Notice that,
as shown in Fig. 4, the control action contains some switching; however, it has no strong
chattering effects because the continuous terms of the controller are dominant.
The numerical examples and simulation results show the usefulness and effectiveness of the
proposed design approach.
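The boundary-layer remedy mentioned above can be sketched in one line of Python (phi is a design choice; this is illustrative, not the authors' implementation):

import numpy as np

def sat(s, phi=0.05):
    """Saturation function used in place of sign(s) to suppress chattering."""
    return np.clip(s / phi, -1.0, 1.0)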

5. Robust delay-dependent stabilization 

In this section, the existence condition of the sliding manifold and delay-dependent stability 
analysis of perturbed sliding time-delay systems are presented. 



5.1 Robust delay-dependent stabilization on the sliding surface 

In this section, the sliding manifold is designed so that on it, or in its neighborhood, and
differently from existing methods, the perturbed sliding time-delay system (16) is globally
asymptotically stable with respect to the state coordinates. The stability results are formulated
in the following theorem.





[SIMULINK block diagram: time-delay system, VSC controller and signal builder blocks.]

Fig. 1. SIMULINK block diagram of uncertain time-delay system with VSC






[Figure: states beta, p, phi, r versus t (s).]

Fig. 2. States' time responses with control



[Figure: sliding functions s1, s2 versus t (s).]

Fig. 3. Sliding functions



[Figure: control inputs u_R, u_A versus t (s).]

Fig. 4. Control functions






[Figure: states beta, p, phi, r versus t (s), without control.]

Fig. 5. States' time responses without control

Theorem 3: Suppose that Assumption 1 holds. Then the transformed multivariable 
sliding time-delay system (16) with matched parameter perturbations and external 
disturbances driven by combined controller (4) and restricted to the sliding surface 
s(t)=0 is robustly globally asymptotically delay-dependent stable with respect to the 
state variables, if the following modified LMI conditions and parameter requirements 
are satisfied: 



H = \begin{bmatrix}
H_{11} & -P\hat{A}_1\hat{A}_0 & -P\hat{A}_1^2 & \cdots & -P\hat{A}_1\hat{A}_N & \cdots & -P\hat{A}_N\hat{A}_0 & \cdots & -P\hat{A}_N^2 & 0 & \cdots & 0 \\
* & -\frac{1}{h_1}R_1 & & & & & & & & & & \\
* & & -\frac{1}{h_1}S_1 & & & & & & & & & \\
\vdots & & & \ddots & & & & & & & & \\
* & & & & -\frac{1}{h_1}S_N & & & & & & & \\
\vdots & & & & & \ddots & & & & & & \\
* & & & & & & -\frac{1}{h_N}R_N & & & & & \\
\vdots & & & & & & & \ddots & & & & \\
* & & & & & & & & -\frac{1}{h_N}S_N & & & \\
0 & & & & & & & & & -T_1 & & \\
\vdots & & & & & & & & & & \ddots & \\
0 & & & & & & & & & & & -T_N
\end{bmatrix} < 0    (35)

where all blank entries are zero and

H_{11} = (\tilde{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)^T P + P(\tilde{A}_0 + \hat{A}_1 + \dots + \hat{A}_N) + h_1(S_1 + R_1) + \dots + h_N(S_N + R_N) + T_1 + \dots + T_N,

CB = B^T P B > 0    (36)

k_0 = \alpha_0;\ \ k_1 = \alpha_1;\ \ \dots;\ \ k_N = \alpha_N    (37)

\delta \ge f_0    (38)



where $P, R_1, \dots, R_N$ (together with $S_1, \dots, S_N$ and $T_1, \dots, T_N$) are symmetric positive definite
matrices which are a feasible solution of the modified LMI (35) with (36); $\tilde{A}_0 = \hat{A}_0 - BG$ is a
stable matrix.

Proof: Let us choose a special augmented Lyapunov-Krasovskii functional as follows:

V = x^T(t)Px(t) + \sum_{i=1}^{N}\int_{-h_i}^{0}\int_{t+\theta}^{t} x^T(\rho)R_i x(\rho)\,d\rho\,d\theta
 + \sum_{i=1}^{N}\int_{-h_i}^{0}\int_{t+\theta-h_i}^{t} x^T(\rho)S_i x(\rho)\,d\rho\,d\theta
 + \sum_{i=1}^{N}\int_{t-h_i}^{t} x^T(\theta)T_i x(\theta)\,d\theta    (39)

The introduced special augmented functional (39) involves three types of terms: the first term
$V_1$ is a standard Lyapunov function, while the second and third groups of terms are
non-standard; namely, $V_2$ and $V_3$ are similar except for the integration horizon, which is
$[t+\theta,\ t]$ for $V_2$ and $[t+\theta-h_i,\ t]$ for $V_3$. This functional is different from existing ones.
The time-derivative of (39) along the perturbed time-delay system (16) can be calculated as:

\dot{V} = x^T(t)\Big[ (\tilde{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)^T P + P(\tilde{A}_0 + \hat{A}_1 + \dots + \hat{A}_N)
 + h_1(S_1 + R_1) + \dots + h_N(S_N + R_N) + T_1 + \dots + T_N \Big] x(t)
 - 2x^T(t)P\hat{A}_1\hat{A}_0\int_{t-h_1}^{t}x(\theta)\,d\theta - 2x^T(t)P\hat{A}_1^2\int_{t-h_1}^{t}x(\theta-h_1)\,d\theta - \dots - 2x^T(t)P\hat{A}_1\hat{A}_N\int_{t-h_1}^{t}x(\theta-h_N)\,d\theta
 - \dots - 2x^T(t)P\hat{A}_N\hat{A}_0\int_{t-h_N}^{t}x(\theta)\,d\theta - \dots - 2x^T(t)P\hat{A}_N^2\int_{t-h_N}^{t}x(\theta-h_N)\,d\theta
 - \int_{t-h_1}^{t}x^T(\theta)R_1 x(\theta)\,d\theta - \dots - \int_{t-h_N}^{t}x^T(\theta)R_N x(\theta)\,d\theta
 - \int_{t-h_1}^{t}x^T(\theta-h_1)S_1 x(\theta-h_1)\,d\theta - \dots - \int_{t-h_N}^{t}x^T(\theta-h_N)S_N x(\theta-h_N)\,d\theta
 - x^T(t-h_1)T_1 x(t-h_1) - \dots - x^T(t-h_N)T_N x(t-h_N)
 + 2x^T(t)P\Delta A_0 x(t) + 2x^T(t)P\Delta A_1 x(t-h_1) + \dots + 2x^T(t)P\Delta A_N x(t-h_N)
 - 2x^T(t)PB\left[ k_0\|x(t)\| + k_1\|x(t-h_1)\| + \dots + k_N\|x(t-h_N)\| \right]\frac{s(t)}{\|s(t)\|}
 - 2\delta x^T(t)PB\frac{s(t)}{\|s(t)\|} + 2x^T(t)PDf(t)    (40)

Since for $h_i > 0$ the Noldus (Jensen-type) inequality holds,

h_i\int_{t-h_i}^{t}x^T(\theta)R_i x(\theta)\,d\theta \ge \left( \int_{t-h_i}^{t}x(\theta)\,d\theta \right)^T R_i \left( \int_{t-h_i}^{t}x(\theta)\,d\theta \right),
\qquad
h_i\int_{t-h_i}^{t}x^T(\theta-h_i)S_i x(\theta-h_i)\,d\theta \ge \left( \int_{t-h_i}^{t}x(\theta-h_i)\,d\theta \right)^T S_i \left( \int_{t-h_i}^{t}x(\theta-h_i)\,d\theta \right),    (41)

and since $x^T(t)PB = s^T(t)$ and, by Assumption 1, $\Delta A_i = BE_i$ with $\|E_i\| \le \alpha_i$ and $\|f(t)\| \le f_0$,
the derivative (40) can be bounded as

\dot{V} \le z^T(t)Hz(t)
 - \left[ (k_0-\alpha_0)\|x(t)\| + (k_1-\alpha_1)\|x(t-h_1)\| + \dots + (k_N-\alpha_N)\|x(t-h_N)\| \right]\|s(t)\|
 - (\delta - f_0)\|s(t)\|    (42)

where $H$ is the block matrix of (35) and

z(t) = \left[ x^T(t)\ \ \Big(\int_{t-h_1}^{t}x(\theta)\,d\theta\Big)^T\ \ \Big(\int_{t-h_1}^{t}x(\theta-h_1)\,d\theta\Big)^T\ \ \cdots\ \ \Big(\int_{t-h_N}^{t}x(\theta-h_N)\,d\theta\Big)^T\ \ x^T(t-h_1)\ \ \cdots\ \ x^T(t-h_N) \right]^T.

Since (35)-(38) hold, (42) reduces to:

\dot{V} \le z^T(t)Hz(t) < 0    (43)



Therefore, we can conclude that the perturbed time-delay system (16), (4) is robustly 
globally asymptotically delay-dependent stable. Theorem 3 is proved. 

Special case: single state-delayed systems. Single state-delayed systems are frequently encountered in control
applications and testing examples; their equation of motion and control algorithm are easily obtained from (1), (4),
(16) by letting N = 1. The modified LMI delay-dependent stability conditions are then significantly reduced and can
be summarized in the following corollary.

Corollary 1: Suppose that Assumption 1 holds. Then the transformed single-delayed sliding system (16) with
matched parameter perturbations and external disturbances driven by the combined controller (4), for which N = 1
and which is restricted to the sliding surface s(t) = 0, is robustly globally asymptotically delay-dependent stable with
respect to the state variables, if the following LMI conditions and parameter requirements are satisfied:



H = [ (Â_0+Â_1)^T P + P(Â_0+Â_1) + h_1(S_1+R_1) + T_1    -P A̅_1 Â_0      -P A̅_1^2       0
       *                                                  -(1/h_1) R_1     0              0
       *                                                   *               -(1/h_1) S_1   0
       *                                                   *                *            -T_1 ]  <  0          (44)

CB = B^T P B > 0                                                              (45)

k_0 = α_0;  k_1 = α_1                                                         (46)

δ > f_0                                                                       (47)

Proof: The corollary follows from the proof of Theorem 3 by letting N = 1.

5.2 Existence conditions 

The final step of the control design is the derivation of the sliding mode existence conditions 
or the reaching conditions for the perturbed time-delay system states to the sliding manifold 
in finite time. These results are summarized in the following theorem. 

Theorem 4: Suppose that Assumption 1 holds. Then the states of the perturbed multivariable time-delay system (1)
with matched parameter uncertainties and external disturbances driven by controller (4) converge to the sliding
surface s(t) = 0 in finite time, if the following conditions are satisfied:

k_0 = α_0 + ‖G‖;  k_1 = α_1;  ...;  k_N = α_N                                 (48)

δ > f_0                                                                       (49)

Proof: Let us choose a modified Lyapunov function candidate as: 

V = (1/2) s^T(t) (CB)^{-1} s(t)                                               (50)

The time-derivative of (50) along the state trajectories of time-delay system (1), (4) can be 
calculated as follows: 

V̇ = s^T(t)(CB)^{-1} ṡ(t) = s^T(t)(CB)^{-1} C ẋ(t)
  = s^T(t)(CB)^{-1} C[A_0 x(t) + A_1 x(t-h_1) + ... + A_N x(t-h_N) + ΔA_0 x(t) + ΔA_1 x(t-h_1) + ... + ΔA_N x(t-h_N) + Bu(t) + Df(t)]
  = s^T(t)(CB)^{-1}[CA_0 x(t) + CA_1 x(t-h_1) + ... + CA_N x(t-h_N) + CBE_0 x(t) + CBE_1 x(t-h_1) + ... + CBE_N x(t-h_N)
      - CB((CB)^{-1}[CA_0 x(t) + CA_1 x(t-h_1) + ... + CA_N x(t-h_N)]
      + [k_0‖x(t)‖ + k_1‖x(t-h_1)‖ + ... + k_N‖x(t-h_N)‖] s(t)/‖s(t)‖ + Gx(t) + δ s(t)/‖s(t)‖) + CBE f(t)]
  = s^T(t)[E_0 x(t) + E_1 x(t-h_1) + ... + E_N x(t-h_N) - [k_0‖x(t)‖ + k_1‖x(t-h_1)‖ + ... + k_N‖x(t-h_N)‖] s(t)/‖s(t)‖ - Gx(t) - δ s(t)/‖s(t)‖ + E f(t)]
  ≤ -[(k_0 - α_0 - ‖G‖)‖x(t)‖ + (k_1 - α_1)‖x(t-h_1)‖ + ... + (k_N - α_N)‖x(t-h_N)‖]‖s(t)‖ - (δ - f_0)‖s(t)‖          (51)
Since (48), (49) hold, then (51) reduces to: 

V̇ = s^T(t)(CB)^{-1} ṡ(t) ≤ -(δ - f_0)‖s(t)‖ ≤ -η‖s(t)‖                        (52)



where

η = δ - f_0 > 0.                                                              (53)

Hence we can evaluate that

V̇(t) ≤ -η √(2 λ_min(CB)) √(V(t)).                                            (54)



The last inequality (54) guarantees the finite-time convergence of system (1), (4)
towards the sliding surface s(t) = 0 (Utkin, 1977), (Perruquetti & Barbot, 2002). Therefore,
Theorem 4 is proved.
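For completeness, a short worked consequence (our own addition, under the estimate (54) in the form reconstructed
above): integrating the differential inequality gives an explicit bound on the reaching time t_r.

\[
\frac{d}{dt}\sqrt{V(t)} \;\le\; -\frac{\eta}{2}\sqrt{2\lambda_{\min}(CB)}
\;\;\Longrightarrow\;\;
\sqrt{V(t)} \;\le\; \sqrt{V(0)} - \frac{\eta}{2}\sqrt{2\lambda_{\min}(CB)}\,t,
\qquad
t_r \;\le\; \frac{2\sqrt{V(0)}}{\eta\sqrt{2\lambda_{\min}(CB)}},
\]

so the sliding surface s(t) = 0 is reached no later than t_r.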

5.3. Numerical examples 

In order to demonstrate the usefulness of the proposed control design techniques let us 
consider the following examples. 

Example 4: Consider a time-delay system (1),(4) with parameters taken from (Li & De 

Carlo, 2003): 



A_0 = [  2      0     1
         1.75   0.25  0.8
        -1      0     1  ],    A_1 = [ -1     0     0
                                       -0.1   0.25  0.2
                                       -0.2   4     5  ],    B = [ 0
                                                                   0
                                                                   1 ],

ΔA_0 = 0.2 sin(t) A_0,   ΔA_1 = 0.2 cos(t) A_1,   f = 0.3 sin(t)

The LMI delay-dependent stability and sliding mode existence conditions are computed by MATLAB programming
(see Appendix A4), where the LMI Control Toolbox is used. The computational results are as follows:

maxh1 = 1;   Geq = [1.2573  2.5652  1.0000]

A0hat = [  2.0000   0        1.0000
           1.7500   0.2500   0.8000
          -7.0038  -0.6413  -3.3095 ];   eigA0hat = [-0.5298 + 0.5383i; -0.5298 - 0.5383i; 0.0000]

A1hat = [ -1.0000   0        0
          -0.1000   0.2500   0.2000
           1.5139  -0.6413  -0.5130 ];   eigA1hat = [-0.2630  -0.0000  -1.0000]

G = [3.3240  10.7583  3.2405];   NormG = 11.7171

A0til = [   2.0000    0         1.0000
            1.7500    0.2500    0.8000
          -10.3278  -11.3996   -6.5500 ];   eigA0til = [-2.7000; -0.8000 + 0.5000i; -0.8000 - 0.5000i]

P  = 1.0e+008 * [  1.1943  -1.1651   0.1562
                  -1.1651   4.1745   0.3597
                   0.1562   0.3597   0.1248 ];   eigP  = 1.0e+008 * [0.0162; 0.8828; 4.5946];   NormP = 4.5946e+008

R1 = 1.0e+008 * [  1.9320   0.2397   0.8740
                   0.2397   1.0386   0.2831
                   0.8740   0.2831   0.4341 ];   eigR1 = 1.0e+008 * [0.0070; 0.9811; 2.4167]

S1 = 1.0e+008 * [  0.8783   0.1869   0.2951
                   0.1869   1.0708   0.2699
                   0.2951   0.2699   0.1587 ];   eigS1 = 1.0e+008 * [0.0159; 0.7770; 1.3149]

T1 = 1.0e+007 * [  2.3624  -0.7303   0.7264
                  -0.7303   7.5758   1.1589
                   0.7264   1.1589   0.4838 ];   eigT1 = 1.0e+007 * [0.0000; 2.5930; 7.8290]

eigsLHS = 1.0e+008 * [-2.8124; -2.0728; -1.0975; -0.9561; -0.8271; -0.7829; -0.5962; -0.2593; -0.0216; -0.0034; -0.0000; -0.0000]

invBtPB = 8.0109e-008;   BtP = 1.0e+007 * [1.5622  3.5970  1.2483]

k_0 = 11.9171;   k_1 = 0.2;   δ > 0.3;   H < 0

The considered time-delay system is robustly asymptotically delay-dependent stable for all
constant delays h_1 ≤ 1.

Example 5: Now, let us consider a networked control time-delay system (1), (4) with 

parameters taken from (Cao et al., 2007): 



"-4 


0" 


,A 1 = 


-1.5 





,B = 


~2 


[-1 


-3 


L _1 


-0.5 




L 2 J 



A = 

AA =0.5sin(t)A ,AA 1 =05cos(t)A 1 ,f = 0.3sin(£) 

The LMI delay-dependent stability and sliding mode existence conditions are computed by MATLAB programming
(see Appendix A5), where the LMI Control Toolbox is used. The computational results are as follows:



maxh1 = 2.0000;   Geq = [0.4762  0.0238]

A0hat = [ -0.1429   0.1429
           2.8571  -2.8571 ];   A1hat = [ -0.0238   0.0238
                                           0.4762  -0.4762 ]

eigA0hat = [-0.0000; -3.0000];   eigA1hat = [-0.0000; -0.5000]

A0til = [ -4.1429  -0.0571
          -1.1429  -3.0571 ];   eigA0til = [-4.2000; -3.0000]

P  = 1.0e+004 * [ 5.7534  -0.1805
                 -0.1805   0.4592 ];   R1 = 1.0e+004 * [ 8.4457  -0.2800
                                                        -0.2800   0.6883 ]

S1 = 1.0e+004 * [ 7.7987   0.2729
                  0.2729   0.1307 ];   T1 = 1.0e+004 * [ 6.7803   0.3390
                                                         0.3390   0.0170 ]

eigsLHS = 1.0e+004 * [-8.8561; -6.7973; -4.1971; -3.9040; -1.4904; -0.0971; -0.0000; -0.0000]

NormP = 5.7595e+004;   G = [2.0000  0.1000];   NormG = 2.0025;   invBtPB = 4.2724e-006;   BtP = 1.0e+005 * [1.1146  0.0557]

eigsP = 1.0e+004 * [0.4530; 5.7595];   eigsR1 = 1.0e+004 * [0.6782; 8.4558];   eigsS1 = 1.0e+004 * [0.1210; 7.8084];   eigsT1 = 1.0e+004 * [0.0000; 6.7973]

k_0 = 2.5025;   k_1 = 0.5;   δ > 0.3;   H < 0

The networked control time-delay system is robustly asymptotically delay-dependent stable
for all constant time-delays h_1 ≤ 2.0000.

Thus, we have designed all the parameters of the combined sliding mode controller. 
Numerical examples show the usefulness of the proposed design approach. 

6. Conclusion 

The problem of the sliding mode control design for matched uncertain multi-input systems 
with several fixed state delays by using of LMI approach has been considered. A new 
combined sliding mode controller has been proposed and designed for the stabilization of 
uncertain time-delay systems with matched parameter perturbations and external 
disturbances. Delay-independent and delay-dependent global stability and sliding mode 
existence conditions have been derived by using Lyapunov-Krasovskii functional method 
and formulated in terms of linear matrix inequality techniques. The allowable upper bounds 
on the time-delay are determined from the LMI stability conditions. In contrast to some existing results, these
bounds are independent of the parameter uncertainties and external disturbances.

Five numerical examples and simulation results with aircraft control application have 
illustrated the usefulness of the proposed design approach. 
The obtained results of this work are presented in (Jafarov, 2008), (Jafarov, 2009). 



7. Appendices 
A1 

clear; clc;
% Example 1 data
A0 = [-4 0; -1 -3];
A1 = [-1.5 0; -1 -0.5];
B  = [2; 2];

% --- first pass: preliminary gains ---
setlmis([])
P  = lmivar(1,[2 1]);
R1 = lmivar(1,[2 1]);
Geq = inv(B'*P*B)*B'*P            % equivalent control gain (preliminary P)
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
G = place(A0hat,B,[-4.5 -3])      % pole-placement gain
A0til = A0hat - B*G
eigA0til = eig(A0til), eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
ii = 1;
lmiterm([-1 1 1 P],ii,ii)         % P > 0
lmiterm([-2 1 1 R1],ii,ii)        % R1 > 0
lmiterm([4 1 1 P],1,A0til,'s')    % A0til'*P + P*A0til
lmiterm([4 1 1 R1],ii,ii)
lmiterm([4 2 2 R1],-ii,ii)
lmiterm([4 1 2 P],1,A1hat)
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, P, eigP = eig(P), R1, eigR1 = eig(R1), eigsLHS = eig(lhs)
BTP = B'*P, BTPB = B'*P*B, invBTPB = inv(B'*P*B)

% --- recalculate with the P obtained from the LMI solution ---
Geq = inv(B'*P*B)*B'*P
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
G = place(A0hat,B,[-4.5 -3])
A0til = A0hat - B*G
eigA0til = eig(A0til), eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
ii = 1;
setlmis([])
P  = lmivar(1,[2 1]);
R1 = lmivar(1,[2 1]);
lmiterm([-1 1 1 P],ii,ii)
lmiterm([-2 1 1 R1],ii,ii)
lmiterm([4 1 1 P],1,A0til,'s')
lmiterm([4 1 1 R1],ii,ii)
lmiterm([4 2 2 R1],-ii,ii)
lmiterm([4 1 2 P],1,A1hat)
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, P, eigP = eig(P), R1, eigR1 = eig(R1), eigsLHS = eig(lhs)
BTP = B'*P, BTPB = B'*P*B, invBTPB = inv(B'*P*B)
normG = norm(G)

A2 

clear; clc;
% Example 2 data (two delays)
A0 = [-1 0.7; 0.31];
A1 = [-0.1 0.1; 0.2];
A2 = [0.2 0; 0 0.1];
B  = [1; 1];

% --- first pass: preliminary gains ---
setlmis([])
P  = lmivar(1,[2 1]);
R1 = lmivar(1,[2 1]);
R2 = lmivar(1,[2 1]);
Geq = inv(B'*P*B)*B'*P            % equivalent control gain (preliminary P)
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
A2hat = A2 - B*Geq*A2
G = place(A0hat,B,[-4.2-.6i -4.2+.6i])
A0til = A0hat - B*G
eigA0til = eig(A0til), eigA0hat = eig(A0hat), eigA1hat = eig(A1hat), eigA2hat = eig(A2hat)
ii = 1;
lmiterm([-1 1 1 P],ii,ii)
lmiterm([-2 1 1 R1],ii,ii)
lmiterm([-3 1 1 R2],ii,ii)
lmiterm([4 1 1 P],1,A0til,'s')
lmiterm([4 1 1 R1],ii,ii)
lmiterm([4 1 1 R2],ii,ii)
lmiterm([4 2 2 R1],-ii,ii)
lmiterm([4 1 2 P],1,A1hat)
lmiterm([4 1 3 P],1,A2hat)
lmiterm([4 3 3 R2],-ii,ii)
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
R2 = dec2mat(LMISYS,xopt,R2);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, eigsLHS = eig(lhs), P, eigP = eig(P), R1, R2, eigR1 = eig(R1), eigR2 = eig(R2)
BTP = B'*P, BTPB = B'*P*B, invBTPB = inv(B'*P*B)

% --- recalculate with the P obtained from the LMI solution ---
Geq = inv(B'*P*B)*B'*P
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
A2hat = A2 - B*Geq*A2
G = place(A0hat,B,[-4.2-.6i -4.2+.6i])
A0til = A0hat - B*G
eigA0til = eig(A0til), eigA0hat = eig(A0hat), eigA1hat = eig(A1hat), eigA2hat = eig(A2hat)
ii = 1;
setlmis([])
P  = lmivar(1,[2 1]);
R1 = lmivar(1,[2 1]);
R2 = lmivar(1,[2 1]);
lmiterm([-1 1 1 P],ii,ii)
lmiterm([-2 1 1 R1],ii,ii)
lmiterm([-3 1 1 R2],ii,ii)
lmiterm([4 1 1 P],1,A0til,'s')
lmiterm([4 1 1 R1],ii,ii)
lmiterm([4 1 1 R2],ii,ii)
lmiterm([4 2 2 R1],-ii,ii)
lmiterm([4 1 2 P],1,A1hat)
lmiterm([4 1 3 P],1,A2hat)
lmiterm([4 3 3 R2],-ii,ii)
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
R2 = dec2mat(LMISYS,xopt,R2);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, eigsLHS = eig(lhs), P, eigP = eig(P), R1, R2, eigR1 = eig(R1), eigR2 = eig(R2)
BTP = B'*P, BTPB = B'*P*B, invBTPB = inv(B'*P*B)
normG = norm(G)

A3 

clear; clc;
% Example 3 data (aircraft model)
A0 = [-0.228 2.148 -0.021 0; -1 -0.0869 0.039; 0.335 -4.424 -1.184 0; 10];
A1 = [0 -0.002 0; 0.004; 0.034 -0.442 0; 0];
B  = [-1.169 0.065; 0.0223 0; 0.0547 2.120; 0];
setlmis([])
P  = lmivar(1,[4 1]);
R1 = lmivar(1,[4 1]);
G = inv(B'*P*B)*B'*P              % equivalent control gain (preliminary P)
A0hat = A0 - B*G*A0
A1hat = A1 - B*G*A1
G1 = place(A0hat,B,[-.5+.082i -.5-.082i -.2 -.3])
A0til = A0hat - B*G1
eigA0til = eig(A0til), eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
ii = 1;
lmiterm([-1 1 1 P],ii,ii)
lmiterm([-2 1 1 R1],ii,ii)
lmiterm([4 1 1 P],1,A0til,'s')
lmiterm([4 1 1 R1],ii,ii)
lmiterm([4 2 2 R1],-ii,ii)
lmiterm([4 1 2 P],1,A1hat)
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, P, eigP = eig(P), R1, eigR1 = eig(R1), eigsLHS = eig(lhs)
BTP = B'*P, BTPB = B'*P*B, invBTPB = inv(B'*P*B)
gnorm = norm(G)



A4 

clear; clc;
% Example 4 data (delay-dependent conditions)
A0 = [2 0 1; 1.75 0.25 0.8; -1 0 1]
A1 = [-1 0 0; -0.1 0.25 0.2; -0.2 4 5]
B  = [0; 0; 1]
h1 = 1.0;

% --- first pass: preliminary gains ---
setlmis([]);
P = lmivar(1,[3 1]);
Geq = inv(B'*P*B)*B'*P            % equivalent control gain (preliminary P)
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
DesPol = [-2.7 -.8+.5i -.8-.5i];
G = place(A0hat,B,DesPol)
A0til = A0hat - B*G
eigA0til = eig(A0til)
R1 = lmivar(1,[3 1]);
S1 = lmivar(1,[3 1]);
T1 = lmivar(1,[3 1]);
lmiterm([-1 1 1 P],1,1);
lmiterm([-1 2 2 R1],1,1);
lmiterm([-2 1 1 S1],1,1);
lmiterm([-3 1 1 T1],1,1);
lmiterm([4 1 1 P],(A0til+A1hat)',1,'s');   % (A0til+A1hat)'*P + P*(A0til+A1hat)
lmiterm([4 1 1 S1],h1,1);
lmiterm([4 1 1 R1],h1,1);
lmiterm([4 1 1 T1],1,1);
lmiterm([4 1 2 P],-1,A1hat*A0hat);
lmiterm([4 1 3 P],-1,A1hat*A1hat);
lmiterm([4 2 2 R1],-1/h1,1);
lmiterm([4 3 3 S1],-1/h1,1);
lmiterm([4 4 4 T1],-1,1);
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
S1 = dec2mat(LMISYS,xopt,S1);
T1 = dec2mat(LMISYS,xopt,T1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, h1, P, R1, S1, T1
eigsLHS = eig(lhs)

% --- repeat with the P obtained from the LMI solution ---
clc;
Geq = inv(B'*P*B)*B'*P
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
G = place(A0hat,B,DesPol)
A0til = A0hat - B*G
eigA0til = eig(A0til)
setlmis([]);
P  = lmivar(1,[3 1]);
R1 = lmivar(1,[3 1]);
S1 = lmivar(1,[3 1]);
T1 = lmivar(1,[3 1]);
lmiterm([-1 1 1 P],1,1);
lmiterm([-1 2 2 R1],1,1);
lmiterm([-2 1 1 S1],1,1);
lmiterm([-3 1 1 T1],1,1);
lmiterm([4 1 1 P],(A0til+A1hat)',1,'s');
lmiterm([4 1 1 S1],h1,1);
lmiterm([4 1 1 R1],h1,1);
lmiterm([4 1 1 T1],1,1);
lmiterm([4 1 2 P],-1,A1hat*A0hat);
lmiterm([4 1 3 P],-1,A1hat*A1hat);
lmiterm([4 2 2 R1],-1/h1,1);
lmiterm([4 3 3 S1],-1/h1,1);
lmiterm([4 4 4 T1],-1,1);
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
S1 = dec2mat(LMISYS,xopt,S1);
T1 = dec2mat(LMISYS,xopt,T1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, h1, P, R1, S1, T1
eigLHS = eig(lhs)
NormP = norm(P)
G
NormG = norm(G)
invBtPB = inv(B'*P*B)
BtP = B'*P
eigP = eig(P), eigR1 = eig(R1), eigS1 = eig(S1), eigT1 = eig(T1)



A5 

clear; clc;
% Example 5 data (networked control system, delay-dependent conditions)
A0 = [-4 0; -1 -3];
A1 = [-1.5 0; -1 -0.5];
B  = [2; 2];
h1 = 2.0000;

% --- first pass: preliminary gains ---
setlmis([]);
P = lmivar(1,[2 1]);
Geq = inv(B'*P*B)*B'*P            % equivalent control gain (preliminary P)
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
% DesPol = [-.8+.5i -.8-.5i]; G = place(A0hat,B,DesPol);
avec = [2 0.1];
G = avec;
A0til = A0hat - B*G
eigA0til = eig(A0til)
R1 = lmivar(1,[2 1]);
S1 = lmivar(1,[2 1]);
T1 = lmivar(1,[2 1]);
lmiterm([-1 1 1 P],1,1);
lmiterm([-1 2 2 R1],1,1);
lmiterm([-2 1 1 S1],1,1);
lmiterm([-3 1 1 T1],1,1);
lmiterm([4 1 1 P],(A0til+A1hat)',1,'s');
lmiterm([4 1 1 S1],h1,1);
lmiterm([4 1 1 R1],h1,1);
lmiterm([4 1 1 T1],1,1);
lmiterm([4 1 2 P],-1,A1hat*A0hat);
lmiterm([4 1 3 P],-1,A1hat*A1hat);
lmiterm([4 2 2 R1],-1/h1,1);
lmiterm([4 3 3 S1],-1/h1,1);
lmiterm([4 4 4 T1],-1,1);
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
S1 = dec2mat(LMISYS,xopt,S1);
T1 = dec2mat(LMISYS,xopt,T1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, h1, P, R1, S1, T1
eigsLHS = eig(lhs)

% --- repeat with the P obtained from the LMI solution ---
Geq = inv(B'*P*B)*B'*P
A0hat = A0 - B*Geq*A0
A1hat = A1 - B*Geq*A1
eigA0hat = eig(A0hat), eigA1hat = eig(A1hat)
G = avec;
A0til = A0hat - B*G
eigA0til = eig(A0til)
setlmis([]);
P  = lmivar(1,[2 1]);
R1 = lmivar(1,[2 1]);
S1 = lmivar(1,[2 1]);
T1 = lmivar(1,[2 1]);
lmiterm([-1 1 1 P],1,1);
lmiterm([-1 2 2 R1],1,1);
lmiterm([-2 1 1 S1],1,1);
lmiterm([-3 1 1 T1],1,1);
lmiterm([4 1 1 P],(A0til+A1hat)',1,'s');
lmiterm([4 1 1 S1],h1,1);
lmiterm([4 1 1 R1],h1,1);
lmiterm([4 1 1 T1],1,1);
lmiterm([4 1 2 P],-1,A1hat*A0hat);
lmiterm([4 1 3 P],-1,A1hat*A1hat);
lmiterm([4 2 2 R1],-1/h1,1);
lmiterm([4 3 3 S1],-1/h1,1);
lmiterm([4 4 4 T1],-1,1);
LMISYS = getlmis;
[copt,xopt] = feasp(LMISYS);
P  = dec2mat(LMISYS,xopt,P);
R1 = dec2mat(LMISYS,xopt,R1);
S1 = dec2mat(LMISYS,xopt,S1);
T1 = dec2mat(LMISYS,xopt,T1);
evlmi = evallmi(LMISYS,xopt);
[lhs,rhs] = showlmi(evlmi,4);
lhs, h1, P, R1, S1, T1
eigsLHS = eig(lhs)
NormP = norm(P)
G
NormG = norm(G)
invBtPB = inv(B'*P*B)
BtP = B'*P
eigsP = eig(P), eigsR1 = eig(R1), eigsS1 = eig(S1), eigsT1 = eig(T1)



8. References 

Utkin, V. I. (1977), Variable structure system with sliding modes, IEEE Transactions on 

Automatic Control, Vol. 22, pp. 212-222. 
Sabanovic, A.; Fridman, L. & Spurgeon, S. (Editors) (2004). Variable Structure Systems: from 

Principles to Implementation, The Institution of Electrical Engineering, London. 
Perruquetti, W. & Barbot, J. P. (2002). Sliding Mode Control in Engineering, Marcel Dekker, 

New York. 
Richard J. P. (2003). Time-delay systems: an overview of some recent advances and open 

problems, Automatica, Vol. 39, pp. 1667-1694. 




Young, K. D.; Utkin, V. I. & Ozguner, U. (1999). A control engineer's guide to sliding
mode control, IEEE Transactions on Control Systems Technology, Vol. 7, No. 3, pp. 328-342.
Spurgeon, S. K. (1991). Choice of discontinuous control component for robust sliding mode 

performance, International Journal of Control, Vol. 53, No. 1, pp. 163-179. 
Choi, H. H. (2002). Variable structure output feedback control design for a class of uncertain 

dynamic systems, Automatica, Vol. 38, pp. 335-341. 
Jafarov, E. M. (2009). Variable Structure Control and Time-Delay Systems, Prof. Nikos 

Mastorakis (Ed.), 330 pages, A Series of Reference Books and Textbooks, WSEAS 

Press, ISBN: 978-960-474-050-5. 
Shyu, K. K. & Yan, J. J. (1993). Robust stability of uncertain time-delay systems and its

stabilization by variable structure control, International Journal of Control, Vol. 57, 

pp. 237-246. 
Koshkouei, A. J. & Zinober, A. S. I. (1996). Sliding mode time-delay systems, Proceedings of 

the IEEE International Workshop on Variable Structure Control, pp. 97-101, Tokyo, 

Japan. 
Luo, N.; De La Sen, M. & Rodellar, J. (1997). Robust stabilization of a class of uncertain

time-delay systems in sliding mode, International Journal of Robust and Nonlinear 

Control, Vol. 7, pp. 59-74. 
Li, X. & De Carlo, R. A. (2003). Robust sliding mode control of uncertain time-delay systems, 

International Journal of Control, Vol. 76, No. 1, pp. 1296-1305. 
Gouisbaut, F.; Dambrine, M. & Richard, J. P. (2002). Robust control of delay systems: a 

sliding mode control design via LMI, Systems and Control Letters, Vol. 46, pp. 219- 

230. 
Fridman, E.; Gouisbaut, F.; Dambrine, M. & Richard, J. P. (2003). Sliding mode control of 

systems with time-varying delays via descriptor approach, International Journal of 

Systems Science, Vol. 34, No. 8-9, pp. 553-559. 
Cao, J.; Zhong, S. & Hu, Y. (2007). Novel delay-dependent stability conditions for a class of 

MIMO networked control systems with nonlinear perturbation, Applied Mathematics 

and Computation, doi: 10.1016/j, pp. 1-13. 
Jafarov, E. M. (2005). Robust sliding mode controllers design techniques for 

stabilization of multivariable time-delay systems with parameter perturbations 

and external disturbances, International Journal of Systems Science, Vol. 36, No. 7, 

pp. 433-444. 
Hung, J. Y.; Gao, W. & Hung, J. C. (1993). Variable structure control: a survey, IEEE

Transactions on Industrial Electronics, Vol. 40, No. 1, pp. 2 - 22. 
Xu, J.-X.; Hashimoto, H.; Slotine, J.-J. E.; Arai, Y. & Harashima, F. (1989). Implementation of 

VSS control to robotic manipulators-smoothing modification, IEEE Transactions on 

Industrial Electronics, Vol. 36, No. 3, pp. 321-329. 
Tan, S.-C; Lai, Y. M.; Tse, C. K.; Martinez-Salamero, L. & Wu, C.-K. (2007). A fast- 
response sliding-mode controller for boost-type converters with a wide range of 

operating conditions, IEEE Transactions on Industrial Electronics, Vol. 54, No. 6, pp. 

3276-3286. 




Li, H.; Chen, B.; Zhou, Q. & Su, Y. (2010). New results on delay-dependent robust stability of 
uncertain time delay systems, International Journal of Systems Science, Vol. 41, No. 6, 
pp. 627-634. 

Schmidt, L. V. (1998). Introduction to Aircraft Flight Dynamics, AIAA Education Series, Reston, 
VA. 

Jafarov, E. M. (2008). Robust delay-dependent stabilization of uncertain time-delay 
systems by variable structure control, Proceedings of the International IEEE 
Workshop on Variable Structure Systems VSS'08, pp. 250-255, June 2008, Antalya, 
Turkey. 

Jafarov, E. M. (2009). Robust sliding mode control of multivariable time-delay systems, 
Proceedings of the 11th WSEAS International Conference on Automatic Control, 
Modelling and Simulation, pp. 430-437, May-June 2009, Istanbul, Turkey. 



A Robust Reinforcement Learning System 

Using Concept of Sliding Mode Control for 

Unknown Nonlinear Dynamical System 

Masanao Obayashi, Norihiro Nakahara, Katsumi Yamada, 
Takashi Kuremoto, Kunikazu Kobayashi and Liangbing Feng 

Yamaguchi University 
Japan 



1. Introduction 

In this chapter, a novel control method using reinforcement learning (RL) (Sutton and
Barto (1998)) with the concept of sliding mode control (SMC) (Slotine and Li (1991)) for
an unknown dynamical system is considered.

In designing the control system for unknown dynamical system, there are three approaches. 
The first one is the conventional model-based controller design, such as optimal control and 
robust control, each of which is mathematically elegant, however both controller design 
procedures present a major disadvantage posed by the requirement of the knowledge of the 
system dynamics to identify and model it. In such cases, it is usually difficult to model the 
unknown system, especially, the nonlinear dynamical complex system, to make matters 
worse, almost all real systems are such cases. 

The second one is the way to use only the soft-computing, such as neural networks, fuzzy 
systems, evolutionary systems with learning and so on. However, in these cases it is well 
known that modeling and identification procedures for the dynamics of the given uncertain 
nonlinear system and controller design procedures often become time consuming iterative 
approaches during parameter identification and model validation at each step of the 
iteration, and in addition, the control system designed through such troubles does not 
guarantee the stability of the system. 

The last one is to combine the soft-computing methods above with model-based control theory,
such as optimal control, sliding mode control (SMC), H∞ control, and so on. Control systems designed
through such control theories have some advantages: the good properties which the adopted theory has
originally, robustness, and a smaller number of required learning iterations, which is useful for fragile
systems whose controller design does not allow many iterative procedures. This chapter concerns the last
approach, that is, an RL system, a kind of soft-computing method, supported by robust control theory,
especially SMC, for uncertain nonlinear systems.

RL has been extensively developed in the computational intelligence and machine learning 
societies, generally to find optimal control policies for Markovian systems with discrete state 
and action space. RL-based solutions to the continuous-time optimal control problem have 
been given in Doya (Doya (2000). The main advantage of using RL for solving optimal 






control problems comes from the fact that a number of RL algorithms, e.g. Q-learning 
(Watkins et al. (1992)) and actor-critic learning (Wang et al. (2002)) and Obayashi et al. 
(2008)), do not require knowledge or identification/ learning of the system dynamics. On the 
other hand, remarkable characteristics of SMC method are simplicity of its design method, 
good robustness and stability for deviation of control conditions. 

Recently, a few studies on robust reinforcement learning have appeared, e.g.,
Morimoto et al. (2005) and Wang et al. (2002), which are designed to be robust to external
disturbances by introducing the idea of H∞ control theory (Zhou et al. (1996)), and our
previous work (Obayashi et al. (2009)), which addresses deviations of the system parameters by
introducing the idea of sliding mode control commonly used in model-based control.
However, applying reinforcement learning to a real system has a serious problem, that is,
many trials are required for learning to design the control system.

Firstly, we introduce an actor-critic method, a kind of RL, united with SMC. Through a
computer simulation of inverted pendulum control without use of the pendulum dynamics, it is
shown that the combined method learns in fewer trials than the actor-critic method alone and has
good robustness (Obayashi et al. (2009a)).
In applying the controller design, another problem exists, that is, the incomplete observation
problem of the state of the system. To solve this problem, some methods have been
suggested, namely, the use of observer theory (Luenberger (1984)), state variable filter
theory (Hang (1976), Obayashi et al. (2009b)), or both (Kung and Chen (2005)).
Secondly, we introduce a robust reinforcement learning system using the concept of SMC,
which uses a neural-network-type structure in an actor/critic configuration (refer to Fig. 1),
for the case in which the system state is only partly available, by employing the state variable
filter (Hang (1976)).







Fig. 1. The construction of the actor-critic system (the symbols in this figure are referred to in Section 2):
the critic receives the state x(t) and the reward r(t) and produces the prediction P(t) and the prediction error
r̂(t); the noise generator produces n(t); the actor outputs the control input u(t) to the environment.

The rest of this chapter is organized as follows. In Section 2, the conventional actor-critic
reinforcement learning system is described. In Section 3, the controlled system, the state variable filter
and sliding mode control are briefly explained. The proposed actor-critic reinforcement
learning system with the state variable filter using sliding mode control is described in Section
4. A comparison between the proposed system and the conventional system through
simulation experiments is given in Section 5. Finally, the conclusion is given in Section 6.




2. Actor-critic reinforcement learning system 

Reinforcement learning (RL, Sutton and Barto (1998)) is learning through trial and error,
based on rewards and penalties obtained through the interaction between the agent and its
environment, as commonly performed by living things. The actor-critic method is one of the
representative reinforcement learning methods. We adopted it because of its flexibility in dealing
with both continuous and discrete state-action spaces. The structure of the actor-critic
reinforcement learning system is shown in Fig. 1. The actor plays the role of a controller and
the critic plays the role of an evaluator in the control field. The noise plays a role in the search
for the optimal action.

2.1 Structure and learning of critic 
2.1.1 Structure of critic 

The function of the critic is the calculation of P(t): the prediction of the sum of the discounted
rewards r(t) that will be obtained in the future. Of course, if the value of P(t) becomes
bigger, the performance of the system becomes better. This is briefly explained as follows.
The sum of the discounted rewards that will be obtained in the future is defined as V(t):

V(t) = Σ_{l=0}^{∞} γ^l r(t+l),                                                (1)

where γ (0 ≤ γ < 1) is a constant parameter called the discount rate.
Equation (1) can be rewritten as

V(t) = r(t) + γ V(t+1).                                                       (2)

Here the prediction value of V(t) is denoted by P(t). The prediction error r̂(t) is expressed as follows:

r̂(t) = r̂_t = r(t) + γ P(t+1) - P(t).                                         (3)

The parameters of the critic are adjusted to reduce this prediction error r̂(t). In our case the
prediction value P(t) is calculated as the output of a radial basis function network (RBFN):

P(t) = Σ_{j=1}^{J} ω_j^c y_j^c(t),                                            (4)

y_j^c(t) = exp( -Σ_{i=1}^{n} (x_i(t) - c_{ij}^c)² / (σ_{ij}^c)² ).            (5)



Here y_j^c(t) is the jth node's output of the middle layer of the critic at time t, ω_j^c is the weight
of the jth output of the middle layer of the critic, x_i is the ith state of the environment at time t,
c_{ij}^c and σ_{ij}^c are the center and dispersion of the ith input of the jth basis function, respectively,
J is the number of nodes in the middle layer of the critic, and n is the number of states of the system (see Fig. 2).






Fig. 2. Structure of the critic (input layer, middle layer of radial basis function nodes, output layer).

2.1.2 Learning of parameters of critic 

The parameters of the critic are adjusted by a back-propagation-type rule that drives the prediction error
r̂(t) toward zero. The updating rule is

Δω_j^c = -η_c ∂(r̂_t²/2)/∂ω_j^c = η_c r̂_t y_j^c(t),                           (6)

where η_c is a small positive learning coefficient.
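As an illustration, a minimal MATLAB sketch of one critic evaluation and update per Eqs. (3)-(6) follows; the
variable names (wc, c_c, sig_c, and the function name itself) are ours, not the authors', and the vectorized RBF
layout is assumed from Fig. 2.

% One critic evaluation and update step per Eqs. (3)-(6) (illustrative sketch).
% x, x_next : n-by-1 state vectors at times t and t+1
% wc        : J-by-1 critic weights; c_c, sig_c : n-by-J centers and dispersions
function [wc, rhat] = critic_step(x, x_next, r, wc, c_c, sig_c, gamma, eta_c)
    yc      = exp(-sum(((x      - c_c).^2) ./ (sig_c.^2), 1))';  % Eq. (5), J-by-1
    yc_next = exp(-sum(((x_next - c_c).^2) ./ (sig_c.^2), 1))';
    P      = wc' * yc;                        % Eq. (4): prediction P(t)
    P_next = wc' * yc_next;                   %          prediction P(t+1)
    rhat   = r + gamma * P_next - P;          % Eq. (3): prediction error
    wc     = wc + eta_c * rhat * yc;          % Eq. (6): weight update
end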

2.2 Structure and learning of actor 
2.2.1 Structure of actor 

Figure 3 shows the structure of the actor. The actor plays the role of the controller and outputs
the control signal, the action u(t), to the environment. The actor also consists of a radial basis
function network. The jth basis function of the middle layer of the actor is

y_j^a(t) = exp( -Σ_{i=1}^{n} (x_i(t) - c_{ij}^a)² / (σ_{ij}^a)² ),            (7)

u'(t) = Σ_{j=1}^{J} ω_j^a y_j^a(t),                                           (8)

u^1(t) = u_max (1 - exp(-u'(t))) / (1 + exp(-u'(t))),                         (9)

u(t) = u^1(t) + n(t).                                                         (10)



Here y_j^a is the jth node's output of the middle layer of the actor, c_{ij}^a and σ_{ij}^a are the center and
dispersion of the ith input of the jth basis function of the actor, respectively, ω_j^a is the connection weight
from the jth node of the middle layer to the output, u(t) is the control input, and n(t) is an additive noise.






Fig. 3. Structure of the actor (input layer, middle layer, output layer).

2.2.2 Noise generator 

The noise generator gives the output of the actor diversity by adding noise, which realizes trial-and-error
learning based on the outcome of the executed action. The noise n(t) is generated as

n(t) = n_t = noise_t · min(1, exp(-P(t))),                                    (11)

where noise_t is a uniformly distributed random number in [-1, 1] and min(·) denotes the minimum. As P(t)
becomes bigger (which means the action approaches the optimal action), the noise becomes smaller. This leads
to stable learning of the actor.

2.2.3 Learning of parameters of actor 

The parameters of the actor, ω_j^a (j = 1, ..., J), are adjusted using the result of executing the
output of the actor, i.e., the prediction error r̂_t and the noise:

Δω_j^a = η_a · n_t · r̂_t · ∂u^1(t)/∂ω_j^a,                                    (12)

where η_a (> 0) is the learning coefficient. Equation (12) means that (-n_t · r̂_t) is treated as an
error, and ω_j^a is adjusted opposite to the sign of (-n_t · r̂_t). In other words, as a result of executing
u(t), if, e.g., the sign of the additive noise is positive and the sign of the prediction error is
positive, the positive additive noise was a success, so the value of ω_j^a should be
increased (see Eqs. (8)-(10)), and vice versa.
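A minimal sketch of one actor evaluation and update per Eqs. (7)-(12) is given below, under the same assumed
variable layout as the critic sketch; all names are ours, and rhat denotes the prediction error supplied by the critic.

% One actor evaluation and update step per Eqs. (7)-(12) (illustrative sketch).
% wa : J-by-1 actor weights; c_a, sig_a : n-by-J centers and dispersions
function [u, wa, n_t] = actor_step(x, wa, c_a, sig_a, u_max, P, rhat, eta_a)
    ya     = exp(-sum(((x - c_a).^2) ./ (sig_a.^2), 1))';    % Eq. (7)
    u_lin  = wa' * ya;                                        % Eq. (8)
    u1     = u_max * (1 - exp(-u_lin)) / (1 + exp(-u_lin));   % Eq. (9), bounded output
    n_t    = (2*rand - 1) * min(1, exp(-P));                  % Eq. (11), exploration noise
    u      = u1 + n_t;                                        % Eq. (10)
    % Eq. (12): d u1 / d wa_j = u_max * 2*exp(-u_lin)/(1+exp(-u_lin))^2 * ya_j
    du1_dw = u_max * 2*exp(-u_lin) / (1 + exp(-u_lin))^2 * ya;
    wa     = wa + eta_a * n_t * rhat * du1_dw;
end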

3. Controlled system, variable filter and sliding mode control 

3.1 Controlled system 

This chapter deals with the following nth-order nonlinear differential equation:

x^(n) = f(x) + b(x) u,                                                        (13)

y = x,                                                                        (14)

where x = [x, ẋ, ..., x^(n-1)]^T is the state vector of the system. It is assumed that only a
part of the state, y (= x), is observable; u is the control input; f(x) and b(x) are unknown continuous
functions.

Object of the control system: to determine the control input u which drives the states of the system
to their targets x_d. We define the error vector e as

e = [e, ė, ..., e^(n-1)]^T = [x - x_d, ẋ - ẋ_d, ..., x^(n-1) - x_d^(n-1)]^T.  (15)

The estimate of e, ê, is available through the state variable filter (see Fig. 4).



3.2 State variable filter 

Usually not all the states of the system are available for measurement in a real system. In this work only
the state x, that is, e, is measured, so the error vector e is estimated as ê through the state variable filter
of Eq. (16) (Hang (1976)) (see Fig. 4):

ê^(i) = [ ω_0 p^i / (p^n + ω_{n-1} p^{n-1} + ... + ω_1 p + ω_0) ] e,   (i = 0, ..., n-1)        (16)

Fig. 4. Internal structure of the state variable filter.
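As an illustration of how the filter (16) can be realized in the second-order case used later in Eqs. (22)-(23), a
small sketch follows; the controllable-canonical realization, the Euler discretization and all names are our own
choices, not the authors'.

% Second-order state variable filter per Eqs. (16), (22)-(23) (illustrative sketch).
% One Euler step: given the measured error e, update the filter state z and return
% the estimates e0_hat (of e) and e1_hat (of de/dt).
function [z, e0_hat, e1_hat] = svf_step(z, e, w0, w1, dt)
    % Realization of w0/(p^2 + w1*p + w0):  z1_dot = z2,  z2_dot = -w0*z1 - w1*z2 + e
    zdot = [z(2); -w0*z(1) - w1*z(2) + e];
    z    = z + dt * zdot;
    e0_hat = w0 * z(1);   % w0/(p^2 + w1*p + w0) * e       (Eq. (22))
    e1_hat = w0 * z(2);   % w0*p/(p^2 + w1*p + w0) * e     (Eq. (23))
end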



3.3 Sliding mode control 

Sliding mode control can be described as follows. First, it restricts the states of the system to a sliding
surface set up in the state space. It then generates a sliding mode s (see Eq. (18)) on the sliding surface and
stabilizes the state of the system to a specified point in the state space. The feature of sliding mode control
is its good robustness. The sliding time-varying surface H and the sliding scalar variable s are defined as

H : { e | s(e) = 0 },                                                         (17)

s(e) = a^T e,                                                                 (18)

where a_{n-1} = 1, a = [a_0, a_1, ..., a_{n-1}]^T, and a_{n-1} p^{n-1} + a_{n-2} p^{n-2} + ... + a_0 is
strictly stable (Hurwitz); p is the Laplace transform variable.


4. Actor-critic reinforcement learning system using sliding mode control with 
state variable filter 

In this section, the reinforcement learning system using sliding mode control with the state
variable filter is explained. The aim of this method is to enhance robustness, which cannot be
obtained by conventional reinforcement learning. The method is almost the same as the conventional
actor-critic system, except that the sliding variable s is used as its input instead of the system
states. Here we mainly explain the definition of the reward and the noise generation method.



Fig. 5. Proposed reinforcement learning control system using sliding mode control with state variable filter:
the reference error passes through the state variable filter to form the sliding scalar variable s, which drives the
reward r, the critic (prediction reward P and prediction error r̂), the noise n, and the actor producing the control
input for the controlled system.

4.1 Reward 

We define the reward r(t) so as to realize the sliding mode control as

r(t) = exp{ -s(t)² };                                                         (19)

from Eq. (18), if the actor-critic system learns so that the sliding variable s becomes smaller, i.e., the error
vector e approaches zero, the reward r(t) becomes bigger.



4.2 Noise 

Noise n(t) is used to maintain diversity of search of the optimal input and to find the 
optimal input. The absolute value of sliding variable s is bigger, n(t) is bigger, and that of s is 
smaller, it is smaller. 



204 



Robust Control, Theory and Applications 



n(t) = z-n -exp -p 



(20) 



where, z is uniform random number of range [-1, 1]. n is upper limit of the perturbation 
signal for searching the optimal input u. p is predefined positive constant for adjusting. 
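A small sketch of Eqs. (18)-(20) for the second-order case (n = 2) follows; the exponential form of the noise
amplitude is our reading of Eq. (20), the value of ρ is a placeholder, and all names are ours.

% Sliding variable, reward and exploration noise per Eqs. (18)-(20) (sketch, n = 2).
function [s, r, n_t] = sliding_reward_noise(e_hat, a0, n0, rho)
    a   = [a0; 1];                               % a_(n-1) = 1, Eq. (18)
    s   = a' * e_hat;                            % sliding scalar variable, e_hat = [e; de/dt]
    r   = exp(-s^2);                             % reward, Eq. (19)
    z   = 2*rand - 1;                            % uniform random number in [-1, 1]
    n_t = z * n0 * exp(-rho / (abs(s) + eps));   % Eq. (20); eps avoids division by zero
end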

5. Computer simulation 

5.1 Controlled object 

To verify the effectiveness of the proposed method, we carried out a control simulation using
an inverted pendulum with the dynamics of Eq. (21) (see Fig. 6):

m l² θ̈ = m g l sin θ - μ_v θ̇ + T.                                            (21)

The parameters in Eq. (21) are described in Table 1.

Fig. 6. An inverted pendulum used in the computer simulation.



θ            joint angle                -
m            mass                       1.0 [kg]
l            length of the pendulum     1.0 [m]
g            gravity                    9.8 [m/sec²]
μ_v          coefficient of friction    0.02
T            input torque               -
x = [θ, θ̇]   observation vector         -

Table 1. Parameters of the system used in the computer simulation.
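For reference, a minimal simulation step for the pendulum dynamics as reconstructed in Eq. (21) is sketched below
(Euler integration); the explicit form of the left-hand side and the function name are our own assumptions.

% One Euler integration step of the inverted pendulum, Eq. (21):
%   m*l^2*th_dd = m*g*l*sin(th) - mu_v*th_d + T
function [th, th_d] = pendulum_step(th, th_d, T, dt)
    m = 1.0; l = 1.0; g = 9.8; mu_v = 0.02;            % Table 1
    th_dd = (m*g*l*sin(th) - mu_v*th_d + T) / (m*l^2);
    th_d  = th_d + dt * th_dd;
    th    = th   + dt * th_d;
end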



5.2 Simulation procedure 

The simulation algorithm is as follows (a compact sketch of this loop is given after the list).
Step 1. The initial control input T is given to the system through Eq. (21).
Step 2. Observe the state of the system. If the end condition is satisfied, the trial ends; otherwise, go to Step 3.
Step 3. Calculate the error vector e, Eq. (15). If only y (= x), i.e., e, is available, calculate ê, the estimate of e,
        through the state variable filter, Eq. (16).
Step 4. Calculate the sliding variable s, Eq. (18).
Step 5. Calculate the reward r by Eq. (19).
Step 6. Calculate the prediction reward P(t) and the control input u(t), i.e., the torque T, by Eqs. (4) and (10),
        respectively.
Step 7. Update the parameters ω_j^c, ω_j^a of the critic and the actor by Eqs. (6) and (12).
Step 8. Set T in Eq. (21) of the system. Go to Step 2.
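A minimal skeleton of this loop, reusing the illustrative helper functions introduced above (critic_step, actor_step,
svf_step, sliding_reward_noise, pendulum_step, all our own names); apart from the values fixed by Table 2 and
subsection 5.3, the remaining constants are placeholders.

% Illustrative skeleton of one learning trial (Steps 1-8).
dt = 0.02; t_end = 20; th = pi/18; th_d = 0; T = 0; z = [0; 0];
for k = 1:round(t_end/dt)
    [th, th_d] = pendulum_step(th, th_d, T, dt);               % Steps 1, 8: apply torque
    if abs(th) > pi/4, break; end                               % Step 2: end condition
    [z, e0, e1] = svf_step(z, th, 100, 10, dt);                 % Step 3: estimate error vector
    [s, r, n_t] = sliding_reward_noise([e0; e1], 5.0, 20, 1);   % Steps 4-5: s and reward
    % Steps 6-7: critic/actor evaluation and parameter updates would be called here,
    % e.g. with critic_step and actor_step acting on the sliding variable s.
    T = 0;  % placeholder for the actor output u(t)
end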

5.3 Simulation conditions 

One trial means that control starts at (θ_0, θ̇_0) = (π/18 [rad], 0 [rad/sec]) and the system is controlled for
20 [sec] with a sampling time of 0.02 [sec]. The trial ends if |θ| > π/4 or the controlling time exceeds 20 [sec].
We set an upper limit for the output u^1 of the actor. Trial success means that θ stays in the range
[-π/360, π/360] for the last 10 [sec]. The numbers of nodes of the hidden layers of the critic and the actor are
set to 15 by trial and error (see Figs. 2-3). The parameters used in this simulation are shown in Table 2.



a : sliding variable parameter in Eq. (18)                       5.0
η_c : learning coefficient of the critic in Eqs. (6), (A6)       0.1
η_a : learning coefficient of the actor in Eqs. (12), (A7)       0.1
u_max : maximum value of the torque in Eqs. (9), (A3)            20
γ : discount rate in Eq. (3)                                     0.9

Table 2. Parameters used in the simulation for the proposed system.

5.4 Simulation results 

Using the simulation procedure of subsection 5.2, the simulation conditions of subsection 5.3, and the proposed
method described above, the control simulation of the inverted pendulum of Eq. (21) was carried out.

5.4.1 Results of the proposed method

a. The case of complete observation

The results of the proposed method in the case of complete observation, that is, when θ and θ̇ are available,
are shown in Fig. 7.



Fig. 7. Result of the proposed method in the case of complete observation (θ, θ̇): (a) angular position and velocity;
(b) torque T_q.






b. The case of incomplete observation using the state variable filter

In the case that only θ is available, we have to estimate θ̇ as θ̂̇. Here, we realize this by means of the state
variable filter (see Eqs. (22)-(23), Fig. 8). By trial and error, its parameters are set to ω_0 = 100, ω_1 = 10,
ω_2 = 50. The results of the proposed method with the state variable filter in the case of incomplete observation
are shown in Fig. 9.

Fig. 8. State variable filter in the case of incomplete observation (only θ).

ê_0 = [ ω_0 / (p² + ω_1 p + ω_0) ] e,                                         (22)

ê_1 = [ ω_0 p / (p² + ω_1 p + ω_0) ] e.                                       (23)



Fig. 9. Results of the proposed method with the state variable filter in the case of incomplete observation
(only θ is available): (a) angular position and velocity; (b) torque T_q.

c. The case of incomplete observation using the difference method

Instead of the state variable filter of 5.4.1 b, to estimate the angular velocity we adopt the commonly used
difference method:

θ̂̇(t) = ( θ(t) - θ(t - Δt) ) / Δt.                                            (24)

We construct the sliding variable s in Eq. (18) by using θ and θ̂̇. The results of the simulation of the proposed
method are shown in Fig. 10.


Fig. 10. Result of the proposed method using the difference method in the case of incomplete observation
(only θ is available): (a) angular position and velocity; (b) torque T_q.

5.4.2 Results of the conventional method. 

d. Sliding mode control method 

The control input is given as follows, 



U 



max ' 



u(t) = 

i — 1 1 

max ' 

a = c6 + 6 

^max==20.0[N] 



if 6 -a > 
if • a < 



(25) 



Result of the control is shown in Fig. 11. In this case, angular, velocity angular, and Torque 
are all oscillatory because of the bang-bang control. 
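A minimal sketch of the conventional sliding mode (bang-bang) law as reconstructed in Eq. (25); the sign convention,
the value of c and the function name are assumptions on our part.

% Conventional bang-bang sliding mode control, Eq. (25) (sketch).
function T = smc_bang_bang(th, th_dot, c, u_max)
    sigma = c*th + th_dot;        % sliding surface variable
    T = -u_max * sign(sigma);     % switch the full torque against the surface
end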




Fig. 11. Result of the conventional (SMC) method in the case of complete observation (θ, θ̇): (a) angle and angular
velocity; (b) torque T.

e. Conventional actor-critic method

The structure of the actor of the conventional actor-critic control method is shown in Fig. 12. The details of the
conventional actor-critic method are explained in the Appendix. Results of the simulation are shown in Fig. 13.






Fig. 12. Structure of the actor of the conventional actor-critic control method (input layer, middle layer, output layer).



Fig. 13. Result of the conventional (actor-critic) method in the case of complete observation (θ, θ̇): (a) angular
position and velocity; (b) torque T_q.



Fig. 14. Result of the conventional PID control method in the case of complete observation (θ, θ̇): (a) angular
position and velocity; (b) torque T_q.






f. Conventional PID control method

The control signal u(t) in the PID control is

u(t) = -K_p e(t) - K_i ∫_0^t e(τ) dτ - K_d ė(t),                              (26)

where K_p = 45, K_i = 1, K_d = 10. Fig. 14 shows the results of the PID control.
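A discrete-time sketch of the PID law (26) with the gains quoted above; the forward-Euler integral and
backward-difference derivative are our own discretization choices.

% Discrete PID controller per Eq. (26) (sketch).
function [u, e_int] = pid_step(e, e_prev, e_int, dt)
    Kp = 45; Ki = 1; Kd = 10;                  % gains quoted in the text
    e_int = e_int + e * dt;                    % integral of the error
    e_dot = (e - e_prev) / dt;                 % derivative of the error
    u = -Kp*e - Ki*e_int - Kd*e_dot;           % Eq. (26)
end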



5.4.3 Discussion 

Table 3 shows the control performance, i.e., the average errors of θ and θ̇ over the controlling time at the final
learning trial, for all the simulated methods. Comparing the proposed method with the conventional actor-critic
method, the proposed method is better. This means that the performance of the conventional actor-critic method has
been improved by making use of the concept of sliding mode control.



                        Proposed method (Actor-Critic + SMC)                 Conventional method
Average error    Complete obs.   Incomplete obs. (θ available)          SMC        PID       Actor-Critic
                                 s.v.f.        Difference
∫|θ| dt / t        0.3002         0.6021        0.1893                  0.2074     0.4350     0.8474
∫|θ̇| dt / t        0.4774         0.4734        0.4835                  1.4768     0.4350     1.2396

Table 3. Control performance at the final learning trial (s.v.f.: state variable filter; Difference: difference method).



Fig. 15. Comparison of the proposed method with incomplete observation, the conventional actor-critic method,
and the PID method for the angle θ.






Figure 15 shows the comparison of the proposed method with incomplete observation, the conventional actor-critic
method and the PID method for the angle θ. In this figure, the proposed method and the PID method converge to zero
smoothly, whereas the conventional actor-critic method does not converge. Comparing the proposed method with PID
control, the latter converges more quickly. These results correspond to Fig. 16: the torque of the PID method
converges first, the proposed method is next, and the conventional method does not converge.



Fig. 16. Comparison of the proposed method with incomplete observation, the conventional actor-critic method,
and the PID method for the torque T.



Fig. 17. Comparison of the proposed method among the case of complete observation, the case with the state variable
filter, and the case with the difference method, for the angle θ.







Fig. 17 shows the comparison of the proposed method among the case of complete observation, the case with the state
variable filter, and the case with the difference method, for the angle θ. Among them, incomplete state observation
with the difference method is the best of the three, and in particular better than complete observation. This can be
explained by Fig. 18: the value of s in the case of the difference method is bigger than that obtained with the measured
angular velocity, which makes the input gain bigger and accelerates the convergence.



Fig. 18. The values of the sliding variable s when using the measured velocity and when using the difference between
the angle and the angle one sample earlier.



5.4.4 Verification of the robust performance of each method 

At first, as mentioned above, each controller was designed at m = 1.0 [kg] in Eq. (21). Next we examined the range
of m over which the inverted pendulum control succeeds. Success is defined as the case in which |θ| < π/45 throughout
the last 1 [sec]. The results of the robust performance for changes of m are shown in Table 4. As to the upper/lower
limit of m for success, the proposed method is better than the conventional actor-critic method, not only when m is
gradually decreased from 1.0 to 0.001, but also when m is increased from 1.0 to 2.377. However, the best one is the
conventional SMC method, and the next is the PID control method.



6. Conclusion 

A robust reinforcement learning method using the concept of the sliding mode control was 
mainly explained. Through the inverted pendulum control simulation, it was verified that 
the robust reinforcement learning method using the concept of the sliding mode control has 
good performance and robustness comparing with the conventional actor-critic method, 
because of the making use of the ability of the SMC method. 






Improving the control performance further and establishing the stability of the proposed method theoretically remain
topics for future work.





                  Proposed method (Actor-Critic + SMC)                 Conventional method
                  Complete          Incomplete observ.          SMC            PID            Actor-Critic
                  observation       + s.v.f.*                   (complete)     (complete)     (complete)
m-max [kg]        2.081             2.377                       11.788         4.806          1.668
m-min [kg]        0.001             0.001                       0.002          0.003          0.021

*(s.v.f.: state variable filter)

Table 4. Robust control performance for change of m in Eq. (21).

7. Acknowledgement 

This work has been supported by Japan JSPS-KAKENHI (No.20500207 and No.20500277). 

8. Appendix 

The structure of the critic of the conventional actor-critic control method is shown in Fig. 2. The number of nodes
of its hidden layer is 15, the same as in the proposed method. The prediction reward P(t) is

P(t) = Σ_{j=1}^{J} ω_j^c exp( -(θ_t - c_{θj}^c)²/(σ_{θj}^c)² - (θ̇_t - c_{θ̇j}^c)²/(σ_{θ̇j}^c)² ).        (A1)

The structure of the actor is similar to that of the critic and is shown in Fig. 12. The output of the actor, u'(t),
and the control input, u(t), are, respectively,

u'(t) = Σ_{j=1}^{J} ω_j^a exp( -(θ_t - c_{θj}^a)²/(σ_{θj}^a)² - (θ̇_t - c_{θ̇j}^a)²/(σ_{θ̇j}^a)² ),        (A2)

u^1(t) = u_max (1 - exp(-u'(t))) / (1 + exp(-u'(t))),                                                    (A3)

u(t) = u^1(t) + n(t).                                                                                    (A4)

The centers c_{θj}^c, c_{θ̇j}^c, c_{θj}^a, c_{θ̇j}^a of the critic and actor RBF networks are set at equal intervals
in the range -3 ≤ c ≤ 3. The dispersions σ_{θj}^c, σ_{θ̇j}^c, σ_{θj}^a, σ_{θ̇j}^a are set at equal intervals in the
range 0 < σ ≤ 1; the values near the origin are set close together. The reward r(t) is set as Eq. (A5) so that it is
maximized at (θ, θ̇) = (0, 0):

r(t) = exp( -(θ_t)² - (θ̇_t)² ).                                                                          (A5)

The parameters of the critic and the actor are learned through the back-propagation algorithm as in
Eqs. (A6)-(A7) (η_c, η_a > 0):

Δω_j^c = -η_c ∂(r̂_t²/2)/∂ω_j^c,   (j = 1, ..., J),                                                       (A6)

Δω_j^a = η_a · n_t · r̂_t · ∂u^1(t)/∂ω_j^a,   (j = 1, ..., J).                                            (A7)

9. References 

K. Doya. (2000). " Reinforcement learning in continuous time and space", Neural 

Computation, 12(1), pp.219-245 
C.C. Hang. (1976). "On state variable filters for adaptive system design", IEEE Trans.
        Automatic Control, Vol. 21, No. 6, pp. 874-876
C.C. Kung & T.H. Chen. (2005). "Observer-based indirect adaptive fuzzy sliding mode 

control with state variable filters for unknown nonlinear dynamical systems", 

Fuzzy Sets and Systems, Vol.155, pp.292-308 
D.G. Luenberger. (1984). "Linear and Nonlinear Programming", Addison-Wesley Publishing 

Company, MA 
J. Morimoto & K. Doya. (2005) "Robust Reinforcement Learning", Neural Computation 

17,335-359 
M. Obayashi & T. Kuremoto & K. Kobayashi. (2008). "A Self -Organized Fuzzy-Neuro 

Reinforcement Learning System for Continuous State Space for Autonomous 

Robots", Proc. of International Conference on Computational Intelligence for Modeling, 

Control and Automation (CIMCA 2008), 552-559 
M. Obayashi & N. Nakahara & T. Kuremoto & K. Kobayashi. (2009a). "A Robust 

Reinforcement Learning Using Concept of Slide Mode Control", The Journal of the 

Artificial Life and Robotics, Vol. 13, No. 2, pp.526-530 
M. Obayashi & K. Yamada, T. & Kuremoto & K. Kobayashi. (2009b). "A Robust 

Reinforcement Learning Using Sliding Mode Control with State Variable Filters", 

Proceedings of International Automatic Control Conference (CACS 2009),CDROM 
J.J.E. Slotine & W. Li. (1991). "Applied Nonlinear Control ", Prentice-Hall, Englewood Cliffs, 

NJ 
R.S. Sutton & A.G. Barto. (1998). " Reinforcement Learning: An Introduction", The MIT 

Press. 




W.Y. Wang & M.L. Chan & C.C. James & T.T. Lee. (2002). "H∞ Tracking-Based Sliding

Mode Control for Uncertain Nonlinear Systems via an Adaptive Fuzzy-Neural 

Approach", IEEE Trans, on Systems, Man, and Cybernetics, Vol.32, No. 4, August, 

pp.483-492 
X. S. Wang & Y. H. Cheng & J. Q. Yi. (2007). "A fuzzy Actor-Critic reinforcement learning 

network", Information Sciences, 177, pp.3764-3781 
C. Watkins & P. Dayan. (1992)."Q-learning," Machine learning, Vol.8, pp.279-292 
K. Zhau & J.C.Doyle & K.Glover. (1996). "Robust optimal control", Englewood Cliffs NJ, 

Prentice Hall 



Part 4 
Selected Trends in Robust Control Theory 



10 



Robust Controller Design: 

New Approaches in the 

Time and the Frequency Domains 

Vojtech Vesely, Danica Rosinova and Alena Kozakova 

Slovak University of Technology 
Slovak Republic 



1. Introduction 



Robust stability and robust control belong to fundamental problems in control theory and 
practice; various approaches have been proposed to cope with uncertainties that always 
appear in real plants as a result of identification/modelling errors, e.g. due to linearization 
and approximation, etc. A control system is robust if it is insensitive to differences between 
the actual plant and its model used to design the controller. To deal with an uncertain plant 
a suitable uncertainty model is to be selected and instead of a single model, behaviour of a 
whole class of models is to be considered. Robust control theory provides analysis and 
design approaches based upon an incomplete description of the controlled process 
applicable in the areas of non-linear and time-varying processes, including multi-input multi-output (MIMO) dynamic systems. 

MIMO systems usually arise as an interconnection of a finite number of subsystems, and in 
general, multivariable centralized controllers are used to control them. However, practical 
reasons often make restrictions on controller structure necessary or reasonable. In an 
extreme case, the controller is split into several local feedbacks and becomes a decentralized 
controller. Compared to centralized full-controller systems such a control structure brings 
about certain performance deterioration; however, this drawback is weighted against 
important benefits, e.g. hardware, operation and design simplicity, and reliability 
improvement. The robust approach is one of the useful ways to address the decentralized control 
problem (Boyd et al., 1994; Henrion et al., 2002; de Oliveira et al., 1999; Gyurkovics & 
Takacs, 2000; Ming Ge et al., 2002; Skogestad & Postlethwaite, 2005; Kozakova & Vesely, 
2008; Kozakova et al., 2009a). 

In this chapter two robust controller design approaches are presented: in the time domain 
the approach based on Linear (Bilinear) matrix inequality (LMI, BMI), and in the frequency 
domain the recently developed Equivalent Subsystem Method (ESM) (Kozakova et al., 
2009b). As proportional-integral-derivative (PID) controllers are the most widely used in 
industrial control systems, this chapter focuses on the time- and frequency domain PID 
controller design techniques resulting from both approaches. 

The development of Linear Matrix Inequality (LMI) computational techniques has provided 
an efficient tool to solve a large set of convex problems in polynomial time (e.g. Boyd et al., 
1994). Significant effort has therefore been made to formulate crucial control problems in an 
algebraic way (e.g. Skelton et al., 1998), so that the numerical LMI solution can be employed. 
This approach is advantageously used in solving control problems for linear systems with 
convex (affine or polytopic) uncertainty domain. However, many important problems in 
linear control design, such as decentralized control, simultaneous static output feedback 
(SOF) or, more generally, structured linear control problems have been proven to be NP-hard 
(Blondel & Tsitsiklis, 1997). Though there exist solvers for bilinear matrix inequalities (BMI), 
suitable to solve e.g. SOF, they are numerically demanding and restricted to problems of 
small dimensions. Intensive research has been devoted to overcoming nonconvexity and 
transforming the nonconvex or NP-hard problem into a convex optimisation problem in the LMI 
framework. Various techniques have been developed using inner or outer convex 
approximations of the respective nonconvex domains. The common tool in both inner and 
outer approximations is the use of linearization or convexification. In (Han & Skelton, 2003; 
de Oliveira et al., 1999), a general convexifying algorithm for the nonconvex function 
together with potential convexifying functions for both the continuous- and discrete-time case 
have been proposed. A linearization approach for continuous- and discrete-time system design 
was independently used in (Rosinova & Vesely, 2003; Vesely, 2003). 

When designing a (PID) controller, the derivative part of the controller causes difficulties 
when uncertainties are considered. In multivariable PID control schemes using LMI 
developed recently (Zheng et al., 2002), the incorporation of the derivative part requires 
inversion of the respective matrix, which does not allow including uncertainties. Another 
way to cope with the derivative part is to assume the special case when the output and its 
derivative are state variables; robust PID controllers for first- and second-order SISO systems 
are proposed for this case in (Ming Ge et al., 2002). 

In Section 2, the state space approach to the design of (decentralized or multi-loop) PID 
robust controllers is proposed for linear uncertain system with guaranteed cost using a new 
quadratic cost function. The major contribution is in considering the derivative part in 
robust control framework. The resulting matrix inequality can be solved either using BMI 
solver, or using linearization approach and following LMI solution. 

The frequency domain design techniques have probably been the most popular among the 
practitioners due to their insightfulness and link to the classical control theory. In 
combination with the robust approach they provide a powerful engineering tool for control 
system analysis and synthesis. An important field of their implementation is control of 
MIMO systems, in particular the decentralized control (DC) due to simplicity of hardware 
and information processing algorithms. The DC design proceeds in two main steps: 1) 
selection of a suitable control configuration (pairing inputs with outputs); 2) design of local 
controllers for individual subsystems. There are two main approaches applicable in Step 2: 
sequential (dependent) design, and independent design. When using sequential design, local 
controllers are designed sequentially as a series of controllers; hence information about "lower-level" 
controllers is directly used as more loops are closed. Main drawbacks are the lack of 
failure tolerance when lower-level controllers fail, strong dependence of performance on the 
loop-closing order, and a trial-and-error design process. 

According to the independent design, local controllers are designed to provide stability of 
each individual loop without considering interactions with other subsystems. The effect of 
interactions is assessed and transformed into bounds for individual designs to guarantee 
stability and a desired performance of the full system. Main advantages are direct design of 
local controllers with no need for trial and error; the limitation is that information 
about controllers in other loops is not exploited, therefore the obtained stability and 
performance conditions are only sufficient and thus potentially conservative. 
Section 3 presents a frequency domain robust decentralized controller design technique 
applicable for uncertain systems described by a set of transfer function matrices. The core of 
the technique is the Equivalent Subsystems Method - a Nyquist-based DC design method 
guaranteeing performance of the full system (Kozakova et al., 2009a; 2009b). To guarantee 
specified performance (including stability), the effect of interactions is assessed using a 
selected characteristic locus of the matrix of interactions further used to reshape frequency 
responses of decoupled subsystems thus generating so-called equivalent subsystems. Local 
controllers of equivalent subsystems independently tuned to guarantee specified 
performance measure value in each of them constitute the decentralized (diagonal) 
controller; when applied to real subsystems, the resulting controller guarantees the same 
performance measure value for the full system. To guarantee robust stability over the 
specified operating range of the plant, the M-A stability conditions are used (Skogestad & 
Postlethwaite, 2005; Kozakova et al., 2009a, 2009b). Two versions of the robust DC design 
methodology have been developed: a the two-stage version (Kozakova & Vesely, 2009; 
Kozakova et al. 2009a), where robust stability is achieved by additional redesign of the DC 
parameters; in the direct version, robust stability conditions are integrated in the design of 
local controllers for equivalent subsystems. Unlike standard robust approaches, the 
proposed technique allows considering full nominal model thus reducing conservatism of 
robust stability conditions. Further conservatism relaxing is achieved if the additive affine 
type uncertainty description and the related M a f- Q stability conditions are used (Kozakova 
& Vesely, 2007; 2008). 

In the sequel, X > 0 denotes a positive definite matrix; * in matrices denotes the respective 
transposed term to make the matrix symmetric; I denotes the identity matrix and 0 denotes 
the zero matrix of the respective dimensions. 

2. Robust PID controller design in the time domain 

In this section the PID control problem formulation via LMI is presented, appropriate 
for polytopic uncertain systems. A robust PID control scheme is then proposed for a structured 
control gain matrix, thus enabling decentralized PID control design. 

2.1 Problem formulation and preliminaries 

Consider the class of linear affine uncertain time-invariant systems described as: 

δx(t) = (A + δA)x(t) + (B + δB)u(t)
y(t) = Cx(t)                                                              (1)

where

δx(t) = ẋ(t) for a continuous-time system
δx(t) = x(t + 1) for a discrete-time system

x(t) ∈ R^n, u(t) ∈ R^m, y(t) ∈ R^l are the state, control and output vectors, respectively; A, B, C are 
known constant matrices of the respective dimensions corresponding to the nominal system; 
δA, δB are matrices of uncertainties of the respective dimensions. The affine uncertainties 
are assumed in the form



δA = Σ_{j=1}^{p} γ_j A_j ,     δB = Σ_{j=1}^{p} γ_j B_j                     (2)

where γ_j ∈ ⟨γ_jmin, γ_jmax⟩ are unknown uncertainty parameters; A_j, B_j, j = 1, 2, ..., p are constant 
matrices of uncertainties of the respective dimensions and structure. The uncertainty 
domain for a system described by (1), (2) can be equivalently described by a polytopic model 
given by its vertices

{ (A_1, B_1, C), (A_2, B_2, C), ..., (A_N, B_N, C) } ,   N = 2^p            (3)
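As an illustration of how the vertex set (3) can be generated from the affine description (2), the following Python sketch enumerates the 2^p vertex matrices. The symmetric bounds and the function name are assumptions of this sketch.

```python
import itertools
import numpy as np

def polytope_vertices(A0, B0, A_unc, B_unc, gamma_max):
    """Enumerate the N = 2**p vertices (3) of the affine uncertain system (1)-(2).

    A_unc, B_unc are lists of the uncertainty matrices A_j, B_j; gamma_max is the
    list of bounds, a symmetric domain <-gamma_j, +gamma_j> being assumed here."""
    p = len(A_unc)
    vertices = []
    for signs in itertools.product((-1.0, 1.0), repeat=p):
        dA = sum(s * g * Aj for s, g, Aj in zip(signs, gamma_max, A_unc))
        dB = sum(s * g * Bj for s, g, Bj in zip(signs, gamma_max, B_unc))
        vertices.append((A0 + dA, B0 + dB))
    return vertices          # N = 2**p pairs (A_i, B_i)
```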

The (decentralized) feedback control law is considered in the form

u(t) = F C x(t)                                                            (4)

where F is an output feedback gain matrix. The uncertain closed-loop polytopic system is 
then

δx(t) = A_c(α) x(t)                                                        (5)

where

A_c(α) ∈ { Σ_{i=1}^{N} α_i A_Ci ,   Σ_{i=1}^{N} α_i = 1 ,   α_i ≥ 0 } ,    (6)
A_Ci = A_i + B_i F C.

To assess the performance, a quadratic cost function known from LQ theory is frequently 
used. In practice, the response rate or overshoot are often limited, therefore we include the 
additional derivative term for state variable into the cost function to damp the oscillations 
and limit the response rate. 

J_c = ∫_0^∞ [ x(t)^T Q x(t) + u(t)^T R u(t) + δx(t)^T S δx(t) ] dt   for a continuous-time and        (7)

J_d = Σ_{t=0}^{∞} [ x(t)^T Q x(t) + u(t)^T R u(t) + δx(t)^T S δx(t) ]   for a discrete-time system     (8)

where Q, S ∈ R^{n×n}, R ∈ R^{m×m} are symmetric positive definite matrices. The concept of 
guaranteed cost control is used in a standard way: let there exist a feedback gain matrix F_0 and 
a constant J_0 such that

J ≤ J_0                                                                    (9)

holds for the closed-loop system (5), (6). Then the respective control (4) is called the 
guaranteed cost control and the value of J_0 is the guaranteed cost.

The main aim of Section 2 of this chapter is to solve the next problem. 

Problem 2.1 

Find a (decentralized) robust PID control design algorithm that stabilizes the uncertain 

system (1) with guaranteed cost with respect to the cost function (7) or (8). 






We start with basic notions concerning Lyapunov stability and convexifying functions. In 
the following we use the D-stability concept (Henrion et al., 2002) to obtain the respective 
stability conditions in a more general form. 

Definition 2.1 (D-stability) 

Consider the D-domain in the complex plane defined as

D = { s ∈ C :  r11 + r12 s + r12* s̄ + r22 s s̄ < 0 }

The considered linear system (1) is D-stable if and only if all its poles lie in the D-domain.

(For simplicity, we use in Def. 2.1 scalar values of the parameters r_ij; in general the stability 
domain can be defined using matrix values of the parameters r_ij with the respective 
dimensions.) The standard choice is r11 = 0, r12 = 1, r22 = 0 for a continuous-time system and 
r11 = −1, r12 = 0, r22 = 1 for a discrete-time system, corresponding to the open left half-plane and the 
unit circle, respectively. 
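The scalar D-domain test of Definition 2.1 is easy to evaluate numerically. The sketch below is illustrative only: it checks whether all eigenvalues of a given closed-loop matrix satisfy the D-domain inequality for chosen r11, r12, r22.

```python
import numpy as np

def is_d_stable(A, r11, r12, r22):
    """Check D-stability of A (Definition 2.1): every eigenvalue s must satisfy
    r11 + r12*s + conj(r12)*conj(s) + r22*|s|**2 < 0."""
    s = np.linalg.eigvals(A)
    f = r11 + r12 * s + np.conj(r12) * np.conj(s) + r22 * np.abs(s) ** 2
    return bool(np.all(f.real < 0))

# Standard choices: open left half-plane and unit circle, respectively
# is_d_stable(A, 0.0, 1.0, 0.0)      # continuous-time
# is_d_stable(A, -1.0, 0.0, 1.0)     # discrete-time
```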

The quadratic D-stability of uncertain system is equivalent to the existence of one Lyapunov 

function for the whole uncertainty set. 

Definition 2.2 (Quadratic D-stability) 

The uncertain system (5) is quadratically D-stable if and only if there exists a symmetric 
positive definite matrix P such that

r12 P A_c(α) + r12* A_c^T(α) P + r11 P + r22 A_c^T(α) P A_c(α) < 0          (10)

To obtain less conservative results than using quadratic stability, a robust stability notion is 
considered based on the parameter-dependent Lyapunov function (PDLF) defined as

P(α) = Σ_{i=1}^{N} α_i P_i   where   P_i = P_i^T > 0                        (11)



Definition 2.3 (de Oliveira et al., 1999) 

System (5) is robustly D-stable in the convex uncertainty domain (6) with the parameter-dependent 
Lyapunov function (11) if and only if there exists a matrix P(α) = P(α)^T > 0 such that

r12 P(α) A_c(α) + r12* A_c^T(α) P(α) + r11 P(α) + r22 A_c^T(α) P(α) A_c(α) < 0          (12)

for all α such that A_c(α) is given by (6). 

Now recall the sufficient robust D-stability condition proposed in (Peaucelle et al., 2000), 
proven to be not too conservative (Grman et al., 2005). 

Lemma 2.1 

If there exist matrices E ∈ R^{n×n}, G ∈ R^{n×n} and N symmetric positive definite matrices 
P_i ∈ R^{n×n} such that for all i = 1, ..., N:

[ r11 P_i + A_Ci^T E^T + E A_Ci        r12 P_i − E + A_Ci^T G    ]
[ r12* P_i − E^T + G^T A_Ci            r22 P_i − (G + G^T)       ]   < 0          (13)

then the uncertain system (5) is robustly D-stable. 
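A minimal feasibility check of the vertex LMIs (13) can, for instance, be set up with CVXPY. The sketch below assumes the continuous-time choice r11 = 0, r12 = 1, r22 = 0 and precomputed closed-loop vertex matrices A_Ci; the solver and tolerance are illustrative choices rather than part of the method.

```python
import cvxpy as cp
import numpy as np

def robust_d_stability(A_vertices, r11=0.0, r12=1.0, r22=0.0):
    """Feasibility check of the vertex LMIs (13) with a parameter-dependent
    Lyapunov function; returns the matrices P_i if the LMIs are feasible."""
    n = A_vertices[0].shape[0]
    E = cp.Variable((n, n))
    G = cp.Variable((n, n))
    Ps = [cp.Variable((n, n), symmetric=True) for _ in A_vertices]
    eps = 1e-6
    cons = []
    for Ai, Pi in zip(A_vertices, Ps):
        M11 = r11 * Pi + Ai.T @ E.T + E @ Ai
        M12 = r12 * Pi - E + Ai.T @ G
        M22 = r22 * Pi - (G + G.T)
        M = cp.bmat([[M11, M12], [M12.T, M22]])
        cons += [Pi >> eps * np.eye(n), M << -eps * np.eye(2 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    if prob.status in ("optimal", "optimal_inaccurate"):
        return [Pi.value for Pi in Ps]
    return None
```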




Note that matrices E and G are not restricted to any special form; they were included to 
relax the conservatism of the sufficient condition. To transform the nonconvex problem of 
structured control (e.g. output feedback, or decentralized control) into a convex form, a 
convexifying (linearizing) function can be used (Han & Skelton, 2003; de Oliveira et al., 2000; 
Rosinova & Vesely, 2003; Vesely, 2003). The respective potential convexifying functions for 
X^{-1} and XWX have been proposed in the linearizing form: 
the linearization of X^{-1} ∈ R^{n×n} about the value X_k > 0 is

Φ(X^{-1}, X_k) = X_k^{-1} − X_k^{-1} (X − X_k) X_k^{-1}                       (14)

the linearization of XWX ∈ R^{n×n} about X_k is

Φ(XWX, X_k) = −X_k W X_k + X W X_k + X_k W X                                  (15)

Both functions defined in (14) and (15) meet one of the basic requirements on a convexifying 
function: to be equal to the original nonconvex term if and only if X_k = X. However, the 
question how to choose an appropriate convexifying function remains still open. 
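The linearizations (14) and (15) are straightforward to implement. The following short sketch only illustrates, with arbitrary numerical data, that both approximations coincide with the exact terms at X = X_k.

```python
import numpy as np

def lin_inverse(X, Xk):
    """Linearization (14) of X**-1 about Xk > 0."""
    Xk_inv = np.linalg.inv(Xk)
    return Xk_inv - Xk_inv @ (X - Xk) @ Xk_inv

def lin_quadratic(X, W, Xk):
    """Linearization (15) of X W X about Xk."""
    return -Xk @ W @ Xk + X @ W @ Xk + Xk @ W @ X

# Both approximations equal the exact nonconvex term at X = Xk:
Xk = np.diag([2.0, 3.0])
W = np.eye(2)
assert np.allclose(lin_inverse(Xk, Xk), np.linalg.inv(Xk))
assert np.allclose(lin_quadratic(Xk, W, Xk), Xk @ W @ Xk)
```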

2.2 Robust optimal controller design 

In this section the new design algorithm for optimal control with guaranteed cost is 

developed using parameter dependent Lyapunov function and convexifying approach 

employing iterative procedure. The proposed control design approach is based on sufficient 

stability condition from Lemma 2.1. The next theorem provides the new form of robust 

stability condition for linear uncertain system with guaranteed cost. 

Theorem 2.1 

Consider uncertain linear system (1), (2) with static output feedback (4) and cost function (7) 

or (8). The following statements are equivalent: 

i. The closed-loop uncertain system (5) is robustly D-stable with PDLF (11) and guaranteed 
cost with respect to the cost function (7) or (8): J ≤ J_0 = x^T(0) P(α) x(0). 
ii. There exist matrices P(α) > 0 defined by (11) such that

r12 P(α) A_c(α) + r12* A_c^T(α) P(α) + r11 P(α) + r22 A_c^T(α) P(α) A_c(α) +
+ Q + C^T F^T R F C + A_c^T(α) S A_c(α) < 0                                    (16)

iii. There exist matrices P(α) > 0 defined by (11) and matrices H, G and F of the respective 
dimensions such that

[ r11 P(α) + A_ci^T H^T + H A_ci + Q + C^T F^T R F C       *                           ]
[ r12* P(α) − H^T + G^T A_ci                               r22 P(α) − (G + G^T) + S    ]   < 0          (17)

A_ci = A_i + B_i F C denotes the i-th closed-loop system vertex. Matrix F is the guaranteed cost 
control gain for the uncertain system (5), (6). 

Proof. For brevity the detailed steps of the proof are omitted where standard tools are applied. 
(i) ⇔ (ii): the proof is analogous to that in (Rosinova, Vesely & Kucera, 2003). The (ii) ⇒ (i) part is 
shown by taking V(t) = x(t)^T P(α) x(t) as a candidate Lyapunov function for (5) and writing 
δV(t), where 

δV(t) = V̇(t) for a continuous-time system
δV(t) = V(t + 1) − V(t) for a discrete-time system

δV(t) = r12* δx(t)^T P(α) x(t) + r12 x(t)^T P(α) δx(t) + r11 x(t)^T P(α) x(t) + r22 δx(t)^T P(α) δx(t)          (18)

Substituting for δx from (5) into (18) and comparing with (16) provides D-stability of the 
considered system when the latter inequality holds. The guaranteed cost can be proved by 
summing or integrating both sides of the following inequality for t from 0 to ∞:

δV(t) ≤ −x(t)^T [ Q + C^T F^T R F C + A_c^T(α) S A_c(α) ] x(t)

The (i) ⇒ (ii) part can be proved by contradiction. 

(ii) ⇔ (iii): The proof follows the same steps as the proof of Lemma 2.1: (iii) ⇒ (ii) is proved 
in the standard way by multiplying both sides of (17) by the full rank matrix (equivalence 
transformation):

[ I   A_c^T(α) ] { l.h.s. of (17) } [ I ; A_c(α) ]  < 0.

(ii) ⇒ (iii) follows from applying the Schur complement to (16) rewritten as

r12 P(α) A_c(α) + r12* A_c^T(α) P(α) + Q + C^T F^T R F C + r11 P(α) + A_c^T(α) [ r22 P(α) + S ] A_c(α) < 0

Therefore

[ X11    X12 ]
[ X12^T  X22 ]  < 0 ,   where

X11 = r11 P(α) + r12 P(α) A_c(α) + r12* A_c^T(α) P(α) + Q + C^T F^T R F C
X12 = A_c^T(α) [ r22 P(α) + S ]
X22 = −[ r22 P(α) + S ]

which for H = r12 P(α), G = r22 P(α) + S gives (17). 

The proposed guaranteed cost control design is based on the robust stability condition (17). 
Since the matrix inequality (17) is not an LMI when both P(α) and F are to be found, we use 
the inner approximation for the continuous-time system, applying the linearization formula (15) 
together with the respective quadratic forms to obtain an LMI formulation, which is then 
solved by an iterative procedure. 

2.3 PID robust controller design for continuous-time systems 

The control algorithm for PID is considered as

u(t) = K_P y(t) + K_I ∫_0^t y(τ) dτ + F_d C_d ẋ(t)                            (19)

The proportional and integral terms can be included into the state vector in the common way 
by defining the auxiliary state z(t) = ∫_0^t y(τ) dτ, i.e. ż(t) = y(t) = C x(t). Then the closed-loop 
system for the 
PI part of the controller is 



[ ẋ(t) ]   [ A + δA   0 ] [ x(t) ]   [ B + δB ]
[ ż(t) ] = [ C        0 ] [ z(t) ] + [ 0      ] u(t)   and   u(t) = F C x(t) + F_d C_d ẋ(t)          (20)

where F C x(t) and F_d C_d ẋ(t) correspond respectively to the PI and D parts of the PID controller. 
The resulting closed-loop system with the PID controller (19) is then

ẋ_n(t) = A_c(α) x_n(t) + B(α) [ F_d C_d   0 ] ẋ_n(t)                          (21)

where the PI controller term is included in A_c(α). (For brevity we omit the argument t.) To 
simplify the notation, in the following we consider a PD controller (which is equivalent to 
the assumption that the I term of the PID controller has already been included into the system 
dynamics in the above outlined way) and the closed loop is described by

ẋ(t) = A_c(α) x(t) + B(α) F_d C_d ẋ(t)                                         (22)

Let us consider the following performance index

J = ∫_0^∞ [ x^T  ẋ^T ] [ Q + C^T F^T R F C    0 ]
                       [ 0                    S ] [ x ; ẋ ] dt                 (23)



which formally corresponds to (7). Then for the Lyapunov function (11) we have the necessary 
and sufficient condition for robust stability with guaranteed cost in the form (16), which for a 
continuous-time system can be rewritten as:

[ x^T  ẋ^T ] [ Q + C^T F^T R F C    P(α) ]
             [ P(α)                 S    ] [ x ; ẋ ] < 0                        (24)

The main result on robust PID control stabilization is summarized in the next theorem. 

Theorem 2.2 

Consider a continuous uncertain linear system (1), (2) with PID controller (19) and cost 

function (23). The following statements are equivalent: 

i. The closed-loop system (21) is robustly D-stable with PDLF (11) and guaranteed cost with 
respect to the cost function (23): J ≤ J_0 = x^T(0) P(α) x(0). 
ii. There exist matrices P(α) > 0 defined by (11), and H, G, F and F_d of the respective 
dimensions such that

[ A_Ci^T H^T + H A_Ci + Q + C^T F^T R F C       *                             ]
[ P_i − M_di^T H^T + G^T A_Ci                   −M_di^T G − G^T M_di + S      ]   < 0          (25)

A_Ci = A_i + B_i F C denotes the i-th closed-loop system vertex; M_di includes the derivative part 
of the PID controller: M_di = I − B_i F_d C_d. 



Proof. Owing to (22), for any matrices H and G:

( −x^T H − ẋ^T G^T ) ( ẋ − A_c(α) x − B(α) F_d C_d ẋ ) +
+ ( ẋ − A_c(α) x − B(α) F_d C_d ẋ )^T ( −H^T x − G ẋ ) = 0                     (26)

Summing up the l.h.s. of (26) and (24) and taking into consideration linearity w.r.t. α, we get 
condition (25). 

Theorem 2.2 provides the robust stability condition for the linear uncertain system with PID 
controller. Notice that the derivative term does not appear in a matrix inversion, which 
allows including the uncertainty of the control matrix B in the stability condition. 
Considering the PID control design, there are unknown matrices H, G, F and F_d to be solved 
from (25). (Recall that A_Ci = A_i + B_i F C, M_di = I − B_i F_d C_d.) Then inequality (25) is bilinear 
with respect to the unknown matrices and can be solved either by a BMI solver, or by the 
linearization approach using (15) to cope with the respective products of unknown matrices. 
For the latter case a PID iterative control design algorithm based on an LMI (4×4 matrix) has 
been proposed. The resulting closed-loop system with the PD controller is

ẋ(t) = ( I − B_i F_d C_d )^{-1} ( A_i + B_i F C ) x(t) ,   i = 1, ..., N        (27)

The extension of the proposed algorithm to decentralized control design is straightforward 
since the respective F and F_d matrices are assumed to be of a prescribed structure; therefore 
it is enough to prescribe the decentralized structure for both matrices. 
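Once F and F_d are available, the closed-loop vertices (27) can be checked directly. The helper below is an illustrative sketch (the function itself is not part of the chapter); the matrix names follow the chapter's notation.

```python
import numpy as np

def closed_loop_vertex_eigs(A_i, B_i, C, C_d, F, F_d):
    """Eigenvalues of one closed-loop vertex (27):
    x_dot = (I - B_i F_d C_d)^(-1) (A_i + B_i F C) x."""
    n = A_i.shape[0]
    M_d = np.eye(n) - B_i @ F_d @ C_d
    return np.linalg.eigvals(np.linalg.solve(M_d, A_i + B_i @ F @ C))
```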

2.4 PID robust controller design for discrete-time systems 

The control algorithm for a discrete-time PID (often denoted as a PSD controller) is considered as

u(k) = k_P e(k) + k_I Σ_{i=0}^{k} e(i) + k_D [ e(k) − e(k−1) ]                  (28)

with control error e(k) = w − y(k); the discrete time is denoted for clarity as k instead of t. The 
PSD controller description in state space is:

z(k+1) = A_R z(k) + B_R e(k)                                                    (29)
u(k) = [ k_I   −k_D ] z(k) + ( k_P + k_I + k_D ) e(k)

where z(k) = [ Σ_{i=0}^{k−1} e(i) ;  e(k−1) ], A_R = [ 1  0 ; 0  0 ], B_R = [ 1 ; 1 ].
Combining (1) for t ≡ k and (29), the augmented closed-loop system is obtained as

[ x(k+1) ]   [ A + δA    0   ] [ x(k) ]   [ B + δB ]
[ z(k+1) ] = [ −B_R C    A_R ] [ z(k) ] + [ 0      ] u(k)

u(k) = [ −K_2   K_1 ] [ C  0 ]
                      [ 0  I ] [ x(k) ; z(k) ]                                  (30)

where K_2 = ( k_P + k_I + k_D ),  K_1 = [ k_I   −k_D ].



Note that there is a significant difference between the PID (19) and PSD (28) control design 
problems: for continuous time, the PID structure results in a closed-loop system that is not strictly 
proper, which complicates the controller design, while for discrete time, the PSD control design is 
formulated as a static output feedback (SOF) problem and therefore the respective SOF design 
techniques can be applied. 

In this section an algorithm for PSD controller design is proposed. Theorem 2.1 provides the 
robust stability condition for the linear time-varying uncertain system, where a constrained 
control structure can be assumed: considering A_Ci = A_i + B_i F C we have the SOF problem 
formulation, which is also the case of the discrete-time PSD control structure for 


F = [ −(k_P + k_I + k_D)   k_I   −k_D ]  (see (30)); taking a block-diagonal structure of the feedback 
gain matrix F provides a decentralized controller. Inequality (17) is an LMI for stability analysis 
with unknown H, G and P_i; however, considering control design, with one more unknown 
matrix F in A_Ci = A_i + B_i F C, inequality (17) is no longer an LMI. Then, to cope with the 
respective unknown matrix products, the inner approximation approach can be used, where 
the resulting LMI is sufficient for the original inequality to hold. 

The next robust output feedback design method is based on (17) using an additional constraint 
on the output feedback matrix and the state feedback control design approach proposed 
respectively in (Crusius & Trofino, 1999; de Oliveira et al., 1999). For stabilizing PSD 
control design (without considering the cost function) we have the following algorithm (taking 
H = 0, Q = 0, R = 0, S = 0). 

PSD controller design algorithm 

Solve the following LMI for unknown matrices K, M, G and P_i of appropriate dimensions, 
P_i being symmetric positive definite, and M, G being any matrices of corresponding dimensions:

[ −P_i                     G^T A_i^T + C^T K^T B_i^T ]
[ A_i G + B_i K C          −G − G^T + P_i + S        ]   < 0                    (31)

P_i > 0 ,   i = 1, ..., N
M C = C G                                                                       (32)

Compute the corresponding output feedback gain matrix

F = K M^{-1}                                                                    (33)

where F = [ −(k_P + k_I + k_D)   k_I   −k_D ]. 

The algorithm above is quite simple and often provides reasonable results. 
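A possible CVXPY realization of the algorithm (31)-(33) is sketched below. The vertex data of the augmented system (30), the solver choice and the tolerances are assumptions of this sketch, not prescriptions of the chapter.

```python
import cvxpy as cp
import numpy as np

def psd_output_feedback(vertices, C, ny):
    """Sketch of the PSD/SOF design algorithm (31)-(33).

    `vertices` is a list of (A_i, B_i) pairs of the augmented system (30);
    C is the augmented output matrix with ny rows (S is taken as 0 here)."""
    n, m = vertices[0][0].shape[0], vertices[0][1].shape[1]
    K = cp.Variable((m, ny))
    M = cp.Variable((ny, ny))
    G = cp.Variable((n, n))
    Ps = [cp.Variable((n, n), symmetric=True) for _ in vertices]
    eps = 1e-6
    cons = [M @ C == C @ G]                       # structural constraint (32)
    for (Ai, Bi), Pi in zip(vertices, Ps):
        M12 = G.T @ Ai.T + C.T @ K.T @ Bi.T
        blk = cp.bmat([[-Pi, M12],
                       [M12.T, -G - G.T + Pi]])   # vertex LMI (31) with S = 0
        cons += [Pi >> eps * np.eye(n), blk << -eps * np.eye(2 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    return K.value @ np.linalg.inv(M.value)       # output feedback gain (33)
```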

2.5 Examples 

In this subsection the major contribution of the proposed approach, the design of a robust 
controller with derivative feedback, is illustrated on examples. The results obtained using 
the proposed new iterative algorithm based on (25) to design the PD controller are provided 
and discussed. The impact of the choice of matrix S is studied as well. We consider affine models 
of the uncertain system (1), (2) with a symmetric uncertainty domain: γ_jmin = −q, γ_jmax = q. 



Example 2.1 

Consider the uncertain system (1), (2) where

A = [ −4.365    −0.6723    −0.3363 ]        B = [ 2.3740     0.7485 ]
    [  7.0880   −6.5570    −4.6010 ]            [ 1.3660     3.4440 ]
    [ −2.4100    7.5840   −14.3100 ]            [ 0.9461    −9.6190 ]

C = C_d = [ 1   0   0 ]
          [ 0   1   0 ]

uncertainty parameter q = 1; uncertainty matrices

A_1 = [ −0.5608     0.8553     0.5892 ]      B_1 = [ 2.3740     0.7485 ]
      [  0.6698    −1.3750    −0.9909 ]            [ 1.3660     3.4440 ]
      [  3.1917     1.7971    −2.5887 ]            [ 0.9461    −9.6190 ]

A_2 = [  0.6698    −1.3750    −0.9909 ]      B_2 = [  0.1562    0.1306 ]
      [ −2.8963    −1.5292    10.5160 ]            [ −0.4958    4.0379 ]
      [ −3.5777     2.8389     1.9087 ]            [ −0.0306    0.8947 ]

The uncertain system can be described by 4 vertices; the corresponding maximal eigenvalues in 
the vertices of the open-loop system are respectively: −4.0896 ± 2.1956i; −3.9243; 1.5014; −4.9595. 
Notice that the open-loop uncertain system is unstable (a positive eigenvalue in the third 
vertex). The stabilizing optimal PD controller has been designed by solving the matrix 
inequality (25). Optimality is considered in the sense of guaranteed cost w.r.t. the cost function 
(23) with matrices R = I_{2×2}, Q = 0.001 · I_{3×3}. The results summarized in Tab. 2.1 indicate the 
differences between results obtained for different choices of the cost matrix S corresponding to 
the derivative of x. 



S           Controller matrices                                   Max eigenvalues in vertices
            F (proportional part), F_d (derivative part)

1e-6 · I    F   = [ −1.0567   −0.5643 ]                            −4.8644
                  [ −2.1825   −1.4969 ]                            −2.4074
            F_d = [ −0.3126   −0.2243 ]                            −3.8368 ± 1.1165i
                  [ −0.0967    0.0330 ]                            −4.7436

0.1 · I     F   = [ −1.0724   −0.5818 ]                            −4.9546
                  [ −2.1941   −1.4642 ]                            −2.2211
            F_d = [ −0.3227   −0.2186 ]                            −3.7823 ± 1.4723i
                  [ −0.0969    0.0340 ]                            −4.7751

Table 2.1 PD controllers from Example 2.1. 

Example 2.2 

Consider the uncertain system (1), (2) where

A = [ −2.980     0.930     0       −0.034  ]        B = [ −0.032 ]
    [ −0.990    −0.210     0.035   −0.0011 ]            [  0     ]
    [  0         0         0        1      ]            [  0     ]
    [  0.390    −5.555     0       −1.890  ]            [ −1.600 ]


















The results are summarized in Tab. 2.2 for R = 1, Q = 0.0005 · I_{4×4} and various values of the cost 
function matrix S. As indicated in Tab. 2.2, increasing values of S slow down the response as 
assumed (the maximal eigenvalue of the closed-loop system is shifted towards zero).

S           q_max       Max. eigenvalue of the closed-loop system
1e-8 · I    1.1         −0.1890
0.1 · I     1.1         −0.1101
0.2 · I     1.1         −0.0863
0.29 · I    1.02        −0.0590

Table 2.2 Comparison of closed-loop eigenvalues (Example 2.2) for various S. 
3. Robust PID controller design in the frequency domain 

In this section an original frequency-domain robust control design methodology is presented, 
applicable for uncertain systems described by a set of transfer function matrices. A two-stage 
as well as a direct design procedure were developed, both based on the Equivalent Subsystems 
Method - a Nyquist-based decentralized controller design method for stability and guaranteed 
performance (Kozakova et al., 2009a; 2009b) - and on stability conditions for the M-Δ structure 
(Skogestad & Postlethwaite, 2005; Kozakova et al., 2009a, 2009b). Using the additive affine-type 
uncertainty and the related M_af-Q structure stability conditions, it is possible to relax the 
conservatism of the M-Δ stability conditions (Kozakova & Vesely, 2007). 



3.1 Preliminaries and problem formulation 

Consider a MIMO system described by a transfer function matrix G(s) ∈ R^{m×m} and a 
controller R(s) ∈ R^{m×m} in the standard feedback configuration (Fig. 1); w, u, y, e, d are 
respectively the vectors of reference, control, output, control error and disturbance of 
compatible dimensions. Necessary and sufficient conditions for internal stability of the 
closed loop in Fig. 1 are given by the Generalized Nyquist Stability Theorem applied to the 
closed-loop characteristic polynomial

det F(s) = det[ I + Q(s) ]                                                      (34)

where Q(s) = G(s)R(s) ∈ R^{m×m} is the open-loop transfer function matrix.



Fig. 1. Standard feedback configuration 

The following standard notation is used: D is the standard Nyquist D-contour in the complex 
plane; the Nyquist plot of g(s) is the image of the Nyquist contour under g(s); N[k, g(s)] is the 
number of anticlockwise encirclements of the point (k, j0) by the Nyquist plot of g(s). 
The characteristic functions of Q(s) are the set of m algebraic functions q_i(s), i = 1, ..., m, given as

det[ q_i(s) I_m − Q(s) ] = 0 ,   i = 1, ..., m                                   (35)

Characteristic loci (CL) are the set of loci in the complex plane traced out by the 
characteristic functions of Q(s), ∀s ∈ D. The closed-loop characteristic polynomial (34) 
expressed in terms of the characteristic functions of Q(s) reads as follows

det F(s) = det[ I + Q(s) ] = Π_{i=1}^{m} [ 1 + q_i(s) ]                          (36)
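On the imaginary axis the characteristic functions (35) are simply the eigenvalues of Q(jω). The numpy sketch below evaluates them on a frequency grid; the grid and the 2×2 usage example are illustrative, and no branch tracking of the loci across frequencies is attempted.

```python
import numpy as np

def characteristic_loci(Q_of_s, omegas):
    """Characteristic loci of Q(s) on s = j*omega: for each frequency the m
    eigenvalues of Q(j*omega), cf. (35); loci ordering is not tracked."""
    loci = []
    for w in omegas:
        q = np.linalg.eigvals(Q_of_s(1j * w))   # the m characteristic functions q_i(jw)
        loci.append(np.sort_complex(q))
    return np.array(loci)                        # shape (len(omegas), m)

# Illustrative 2x2 open loop Q(s) = G(s)R(s)
Q = lambda s: np.array([[1 / (s + 1), 0.5 / (s + 2)],
                        [0.2 / (s + 1), 1 / (s + 3)]])
loci = characteristic_loci(Q, np.logspace(-2, 2, 200))
```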

Theorem 3.1 (Generalized Nyquist Stability Theorem) 
The closed-loop system in Fig. 1 is stable if and only if

a.  det F(s) ≠ 0   ∀s ∈ D

b.  N[ 0, det F(s) ] = Σ_{i=1}^{m} N{ 0, [ 1 + q_i(s) ] } = n_q                  (37)

where F(s) = I + Q(s) and n_q is the number of unstable poles of Q(s). 
Let the uncertain plant be given as a set Π of N transfer function matrices

Π = { G^k(s) } ,  k = 1, 2, ..., N ,   where   G^k(s) = { g_ij^k(s) }_{m×m}       (38)

The simplest uncertainty model is the unstructured uncertainty, i.e. a full complex 
perturbation matrix with the same dimensions as the plant. The set of unstructured 
perturbations D_U is defined as follows

D_U := { E(jω) :  σ_max[ E(jω) ] ≤ ℓ(ω) ,  ℓ(ω) = max_k σ_max[ E_k(jω) ] }        (39)

where ℓ(ω) is a scalar weight on the norm-bounded perturbation Δ(s) ∈ R^{m×m}, 
σ_max[ Δ(jω) ] ≤ 1 over the given frequency range, σ_max(·) is the maximum singular value of (·), 
i.e.

E(jω) = ℓ(ω) Δ(jω)                                                                (40)

For unstructured uncertainty, the set Π can be generated by either additive (E_a), 
multiplicative input (E_i) or output (E_o) uncertainties, or their inverse counterparts (E_ia, E_ii, 
E_io), the latter used for uncertainty associated with plant poles located in the closed right 
half-plane (Skogestad & Postlethwaite, 2005). 

Denote G(s) any member of the set of possible plants Π_k, k = a, i, o, ia, ii, io; G_0(s) the nominal 
model used to design the controller, and ℓ_k(ω) the scalar weight on a normalized 
perturbation. The individual uncertainty forms generate the following related sets Π_k: 
Additive uncertainty:

Π_a := { G(s) : G(s) = G_0(s) + E_a(s) ,  σ_max[ E_a(jω) ] ≤ ℓ_a(ω) }
ℓ_a(ω) = max_k σ_max[ G_k(jω) − G_0(jω) ] ,   k = 1, 2, ..., N                     (41)
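The weight ℓ_a(ω) in (41) can be evaluated directly from the plant set. The short sketch below assumes the models are available as callables returning complex matrices at s = jω; names and the grid are illustrative.

```python
import numpy as np

def additive_weight(G_models, G0, omegas):
    """Scalar additive-uncertainty weight of (41):
    l_a(w) = max_k sigma_max[G_k(jw) - G0(jw)] over the plant set."""
    l_a = np.zeros(len(omegas))
    for idx, w in enumerate(omegas):
        s = 1j * w
        l_a[idx] = max(np.linalg.svd(Gk(s) - G0(s), compute_uv=False)[0]
                       for Gk in G_models)
    return l_a
```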



Multiplicative input uncertainty:

Π_i := { G(s) : G(s) = G_0(s)[ I + E_i(s) ] ,  σ_max[ E_i(jω) ] ≤ ℓ_i(ω) }
ℓ_i(ω) = max_k σ_max{ G_0^{-1}(jω)[ G_k(jω) − G_0(jω) ] } ,   k = 1, 2, ..., N      (42)

Multiplicative output uncertainty:

Π_o := { G(s) : G(s) = [ I + E_o(s) ] G_0(s) ,  σ_max[ E_o(jω) ] ≤ ℓ_o(ω) }
ℓ_o(ω) = max_k σ_max{ [ G_k(jω) − G_0(jω) ] G_0^{-1}(jω) } ,   k = 1, 2, ..., N      (43)

Inverse additive uncertainty:

Π_ia := { G(s) : G(s) = G_0(s)[ I − E_ia(s) G_0(s) ]^{-1} ,  σ_max[ E_ia(jω) ] ≤ ℓ_ia(ω) }
ℓ_ia(ω) = max_k σ_max{ [ G_0(jω) ]^{-1} − [ G_k(jω) ]^{-1} } ,   k = 1, 2, ..., N     (44)

Inverse multiplicative input uncertainty:

Π_ii := { G(s) : G(s) = G_0(s)[ I − E_ii(s) ]^{-1} ,  σ_max[ E_ii(jω) ] ≤ ℓ_ii(ω) }
ℓ_ii(ω) = max_k σ_max{ I − [ G_k(jω) ]^{-1} G_0(jω) } ,   k = 1, 2, ..., N            (45)

Inverse multiplicative output uncertainty:

Π_io := { G(s) : G(s) = [ I − E_io(s) ]^{-1} G_0(s) ,  σ_max[ E_io(jω) ] ≤ ℓ_io(ω) }
ℓ_io(ω) = max_k σ_max{ I − G_0(jω)[ G_k(jω) ]^{-1} } ,   k = 1, 2, ..., N             (46)



The standard feedback configuration with an uncertain plant modelled using any of the above 
unstructured uncertainty forms can be recast into the M-Δ structure (for the additive 
perturbation see Fig. 2), where M(s) is the nominal model and Δ(s) ∈ R^{m×m} is the norm-bounded 
complex perturbation. 

If the nominal closed-loop system is stable, then M(s) is stable and Δ(s) is a perturbation 
which can destabilize the system. The following theorem establishes conditions on M(s) so 
that it cannot be destabilized by Δ(s) (Skogestad & Postlethwaite, 2005). 




Fig. 2. Standard feedback configuration with unstructured additive uncertainty (left) recast 
into the M-Δ structure (right) 

Theorem 3.2 (Robust stability for unstructured perturbations) 

Assume that the nominal system M(s) is stable (nominal stability) and that the perturbation 
Δ(s) is stable. Then the M-Δ system in Fig. 2 is stable for all perturbations 
Δ(s), σ_max(Δ) ≤ 1, if and only if

σ_max[ M(jω) ] < 1 ,   ∀ω                                                        (47)

For the individual uncertainty forms M(s) = ℓ_k(s) M_k(s), k = a, i, o, ia, ii, io; the corresponding 
matrices M_k(s) are given below (disregarding the negative signs which do not affect the 
resulting robustness condition); commonly, the nominal model G_0(s) is obtained as the model 
of mean parameter values.

M(s) = ℓ_a(s) R(s)[ I + G_0(s)R(s) ]^{-1} = ℓ_a(s) M_a(s)               additive uncertainty                     (48)

M(s) = ℓ_i(s) R(s)[ I + G_0(s)R(s) ]^{-1} G_0(s) = ℓ_i(s) M_i(s)         multiplicative input uncertainty          (49)

M(s) = ℓ_o(s) G_0(s)R(s)[ I + G_0(s)R(s) ]^{-1} = ℓ_o(s) M_o(s)          multiplicative output uncertainty         (50)

M(s) = ℓ_ia(s) [ I + G_0(s)R(s) ]^{-1} G_0(s) = ℓ_ia(s) M_ia(s)          inverse additive uncertainty              (51)

M(s) = ℓ_ii(s) [ I + R(s)G_0(s) ]^{-1} = ℓ_ii(s) M_ii(s)                 inverse multiplicative input uncertainty  (52)

M(s) = ℓ_io(s) [ I + G_0(s)R(s) ]^{-1} = ℓ_io(s) M_io(s)                 inverse multiplicative output uncertainty (53)

Conservatism of the robust stability conditions can be reduced by structuring the 
unstructured additive perturbation, introducing the additive affine-type uncertainty E_af(s) 
that brings about a new way of nominal system computation and robust stability conditions 
modifiable for the decentralized controller design (Kozakova & Vesely, 2007; 2008):

E_af(s) = Σ_{i=1}^{p} G_i(s) q_i                                                 (54)

where G_i(s) ∈ R^{m×m}, i = 0, 1, ..., p are stable matrices, p is the number of uncertainties defining 
the 2^p polytope vertices that correspond to the individual perturbed models, and q_i are the 
polytope parameters. The set Π_af generated by the additive affine-type uncertainty E_af is

Π_af := { G(s) : G(s) = G_0(s) + E_af ,  E_af = Σ_{i=1}^{p} G_i(s) q_i ,  q_i ∈ ⟨ q_imin , q_imax ⟩ ,  q_imin + q_imax = 0 }     (55)

where G_0(s) is the "affine" nominal model. Put into vector-matrix form, the individual 
perturbed plants (elements of the set Π_af) can be expressed as follows

G(s) = G_0(s) + [ I q_1  ...  I q_p ] [ G_1(s) ; ... ; G_p(s) ] = G_0(s) + Q G_u(s)        (56)

where Q = [ I q_1  ...  I q_p ] ∈ R^{m×(m·p)} ,  I = I_{m×m} ,  G_u(s) = [ G_1 ... G_p ]^T ∈ R^{(m·p)×m}.

The standard feedback configuration with the uncertain plant modelled using the additive affine-type 
uncertainty is shown in Fig. 3 (on the left); by analogy with the previous cases, it can be 
recast into the M_af-Q structure in Fig. 3 (on the right), where

M_af = G_u R ( I + G_0 R )^{-1} = G_u ( I + R G_0 )^{-1} R                        (57)




Fig. 3. Standard feedback configuration with unstructured affine-type additive uncertainty 
(left), recast into the M_af-Q structure (right) 

Similarly as for the M-Δ system, the stability condition for the M_af-Q system is obtained as

σ_max( M_af Q ) < 1                                                               (58)

Using singular value properties, the small gain theorem, and the assumptions that 
q_max = | q_imin | = | q_imax | and that the nominal model M_af(s) is stable, (58) can further be 
modified to yield the robust stability condition

σ_max( M_af ) · q_max · √p  < 1                                                   (59)
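Condition (59) lends itself to a simple frequency-wise check. The sketch below evaluates σ_max[M_af(jω)]·q_max·√p using (57); it assumes G_0, G_u and R are available as callables returning complex matrices, which is an assumption of this sketch.

```python
import numpy as np

def affine_robust_stability(G0, Gu, R, q_max, p, omegas):
    """Frequency-wise check of the M_af-Q robust stability condition (59):
    sigma_max[M_af(jw)] * q_max * sqrt(p) < 1, with M_af from (57)."""
    margin = np.zeros(len(omegas))
    for idx, w in enumerate(omegas):
        s = 1j * w
        M_af = Gu(s) @ R(s) @ np.linalg.inv(np.eye(G0(s).shape[0]) + G0(s) @ R(s))
        margin[idx] = np.linalg.svd(M_af, compute_uv=False)[0] * q_max * np.sqrt(p)
    return bool(np.all(margin < 1.0)), margin
```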



The main aim of Section 3 of this chapter is to solve the next problem. 
Problem 3.1 

Consider an uncertain system with m subsystems given as a set of N transfer function 
matrices obtained in N working points of plant operation, described by a nominal model 
G_0(s) and any of the unstructured perturbations (41) - (46) or (55). 

Let the nominal model G_0(s) be split into the diagonal part representing the mathematical 
models of the decoupled subsystems, and the off-diagonal part representing interactions 
between the subsystems

G_0(s) = G_d(s) + G_m(s)                                                          (60)

where

G_d(s) = diag{ G_i(s) }_{m×m} ,  det G_d(s) ≠ 0  ∀s ∈ D ;   G_m(s) = G_0(s) − G_d(s)        (61)

A decentralized controller

R(s) = diag{ R_i(s) }_{m×m} ,  det R(s) ≠ 0  ∀s ∈ D                               (62)

is to be designed, with R_i(s) being the transfer function of the i-th local controller. The designed 
controller has to guarantee stability over the whole operating range of the plant specified by 
either (41) - (46) or (55) (robust stability) and a specified performance of the nominal model 
(nominal performance). To solve the above problem, a frequency-domain robust 
decentralized controller design technique has been developed (Kozakova & Vesely, 2009; 
Kozakova et al., 2009b); its core is the Equivalent Subsystems Method (ESM). 



3.2 Decentralized controller design for performance: equivalent subsystems method 

The Equivalent Subsystems Method (ESM) is an original Nyquist-based DC design method for 
stability and guaranteed performance of the full system. According to it, local controller 
designs are performed independently for so-called equivalent subsystems, which are actually 
Nyquist plots of the decoupled subsystems shaped by a selected characteristic locus of the 
interaction matrix. Local controllers of equivalent subsystems independently tuned for 
stability and specified feasible performance constitute the decentralized controller 
guaranteeing the specified performance of the full system. Unlike standard robust approaches, 
the proposed technique considers the full mean parameter value nominal model, thus reducing 
conservatism of the resulting robust stability conditions. In the context of robust decentralized 
controller design, the Equivalent Subsystems Method (Kozakova et al., 2009b) is applied to 
design a decentralized controller for the nominal model G_0(s) as depicted in Fig. 4. 

Fig. 4. Standard feedback loop under decentralized controller 

The key idea behind the method is the factorisation of the closed-loop characteristic polynomial 
det F(s) in terms of the split nominal system (60) under the decentralized controller (62) 
(the existence of R^{-1}(s) is implied by the assumption in (62) that det R(s) ≠ 0):

det F(s) = det{ I + [ G_d(s) + G_m(s) ] R(s) } = det[ R^{-1}(s) + G_d(s) + G_m(s) ] · det R(s)        (63)

Denote

F_1(s) = R^{-1}(s) + G_d(s) + G_m(s) = P(s) + G_m(s)                               (64)

where

P(s) = R^{-1}(s) + G_d(s)                                                          (65)

is a diagonal matrix P(s) = diag{ p_i(s) }_{m×m}. Considering (63) and (64), the stability condition 
(37b) in Theorem 3.1 modifies as follows

N{ 0, det[ P(s) + G_m(s) ] } + N[ 0, det R(s) ] = n_q                              (66)

and a simple manipulation of (65) yields



I + R(s)[ G_d(s) − P(s) ] = I + R(s) G^eq(s) = 0                                   (67)

where

G^eq(s) = diag{ G_i^eq(s) }_{m×m} = diag{ G_i(s) − p_i(s) }_{m×m} ,  i = 1, ..., m   (68)

is a diagonal matrix of equivalent subsystems G_i^eq(s); on the subsystem level, (67) yields m 
equivalent characteristic polynomials

CLCP_i^eq(s) = 1 + R_i(s) G_i^eq(s) ,   i = 1, 2, ..., m                            (69)

Hence, by specifying P(s) it is possible to affect performance of individual subsystems 
(including stability) through R^{-1}(s). In the context of the independent design philosophy, 
the design parameters p_i(s), i = 1, 2, ..., m represent constraints for the individual designs. General 
stability conditions for this case are given in Corollary 3.1. 
Corollary 3.1 (Kozakova & Vesely, 2009) 
The closed loop in Fig. 4 comprising the system (60) and the decentralized controller (62) is 
stable if and only if 
1. there exists a diagonal matrix P(s) = diag{ p_i(s) }_{i=1,...,m} such that all equivalent 
subsystems (68) can be stabilized by their related local controllers R_i(s), i.e. all 
equivalent characteristic polynomials CLCP_i^eq(s) = 1 + R_i(s) G_i^eq(s), i = 1, 2, ..., m, have 
roots with Re{s} < 0; 
2. the following two conditions are met ∀s ∈ D:

a. det[ P(s) + G_m(s) ] ≠ 0
b. N[ 0, det F(s) ] = n_q                                                           (70)

where det F(s) = det( I + G(s)R(s) ) and n_q is the number of open-loop poles with Re{s} > 0. 
In general, p_i(s) are to be transfer functions fulfilling the conditions of Corollary 3.1 and the 
stability condition resulting from the small gain theory; according to it, if both P^{-1}(s) and 
G_m(s) are stable, the necessary and sufficient closed-loop stability condition is

|| P(s)^{-1} G_m(s) || < 1   or   σ_min[ P(s) ] > σ_max[ G_m(s) ]                   (71)

To provide closed-loop stability of the full system under a decentralized controller, 
p_i(s), i = 1, 2, ..., m, are to be chosen so as to appropriately cope with the interactions G_m(s). 
A special choice of P(s) is addressed in (Kozakova et al., 2009a; b): considering the characteristic 
functions g_i(s) of G_m(s) defined according to (35) for i = 1, ..., m, and choosing P(s) to be 
diagonal with identical entries equal to any selected characteristic function g_k(s) of [−G_m(s)], 
where k ∈ {1, ..., m} is fixed, i.e.

P(s) = −g_k(s) I ,   k ∈ {1, ..., m} is fixed                                        (72)

then substituting (72) in (70a) and violating the well-posedness condition yields

det[ P(s) + G_m(s) ] = Π_{i=1}^{m} [ −g_k(s) + g_i(s) ] = 0   ∀s ∈ D                 (73)



In such a case the full closed-loop system is at the limit of instability, with equivalent 
subsystems generated by the selected g_k(s) according to

G_ik^eq(s) = G_i(s) + g_k(s) ,   i = 1, 2, ..., m ,   ∀s ∈ D                        (74)

Similarly, if choosing P(s − α) = −g_k(s − α) I, 0 < α < α_m, where α_m denotes the maximum 
feasible degree of stability for the given plant under the decentralized controller R(s), then

det F_1(s − α) = Π_{i=1}^{m} [ −g_k(s − α) + g_i(s − α) ] = 0   ∀s ∈ D              (75)

Hence, the closed-loop system is stable and has just poles with Re{s} < −α, i.e. its degree of 
stability is α. The pertinent equivalent subsystems are generated according to

G_ik^eq(s − α) = G_i(s − α) + g_k(s − α) ,   i = 1, 2, ..., m                       (76)

To guarantee stability, the following additional condition has to be satisfied simultaneously

det F_1k = Π_{i=1}^{m} [ −g_k(s − α) + g_i(s) ] = Π_{i=1}^{m} r_ik(s) ≠ 0   ∀s ∈ D   (77)

Simply put, by suitably choosing α: 0 < α < α_m to generate P(s − α) it is possible to 
guarantee performance under the decentralized controller in terms of the degree of 
stability α. Lemma 3.1 provides necessary and sufficient stability conditions for the closed 
loop in Fig. 4 and conditions for guaranteed performance in terms of the degree of stability. 
Definition 3.1 (Proper characteristic locus) 

The characteristic locus g_k(s − α) of G_m(s − α), where k ∈ {1, ..., m} is fixed and α ≥ 0, is called a 
proper characteristic locus if it satisfies conditions (73), (75) and (77). The set of all proper 
characteristic loci of a plant is denoted P_S. 
Lemma 3.1 

The closed loop in Fig. 4 comprising the system (60) and the decentralized controller (62) is 
stable if and only if the following conditions are satisfied ∀s ∈ D, α ≥ 0 and 
fixed k ∈ {1, ..., m}: 
1. g_k(s − α) ∈ P_S 
2. all equivalent characteristic polynomials (69) have roots with Re{s} < −α; 
3. N[ 0, det F(s − α) ] = n_qα 
where F(s − α) = I + G(s − α)R(s − α); n_qα is the number of open-loop poles with Re{s} > −α. 
Lemma 3.1 shows that local controllers independently tuned for stability and a specified 
(feasible) degree of stability of equivalent subsystems constitute the decentralized controller 
guaranteeing the same degree of stability for the full system. The design technique resulting 
from Corollary 3.1 enables to design local controllers of equivalent subsystems using any 
SISO frequency-domain design method, e.g. the Neimark D-partition method (Kozakova et 
al., 2009b), standard Bode diagram design, etc. If considering other performance measures in 
the ESM, the design proceeds according to Corollary 3.1 with P(s) and 
G_ik^eq(s) = G_i(s) + g_k(s), i = 1, 2, ..., m, generated according to (72) and (74), respectively. 
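Generating the equivalent subsystems (74) amounts to shifting the decoupled frequency responses by a selected characteristic locus of G_m. The minimal numpy sketch below illustrates this; G0 is assumed to be a callable returning the full nominal frequency-response matrix, and no branch tracking of the loci across frequencies is attempted.

```python
import numpy as np

def equivalent_subsystems(G0, k, omegas):
    """Frequency responses of the equivalent subsystems (74):
    G_i_eq(jw) = G_i(jw) + g_k(jw), where g_k is the k-th characteristic
    function of the interaction matrix G_m = G0 - diag(G0), cf. (60)-(61)."""
    m = G0(1j * omegas[0]).shape[0]
    G_eq = np.zeros((len(omegas), m), dtype=complex)
    for idx, w in enumerate(omegas):
        G = G0(1j * w)
        Gm = G - np.diag(np.diag(G))                  # interactions G_m(jw)
        g = np.sort_complex(np.linalg.eigvals(Gm))    # characteristic functions of G_m
        G_eq[idx, :] = np.diag(G) + g[k]              # shift each decoupled subsystem
    return G_eq
```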




According to the latest results, guaranteed performance in terms of maximum overshoot is 
achieved by applying Bode diagram design for specified phase margin in equivalent 
subsystems. This approach is addressed in the next subsection. 

3.3 Robust decentralized controller design 

The presented frequency domain robust decentralized controller design technique is 
applicable for uncertain systems described as a set of transfer function matrices. The basic 
steps are: 

1. Modelling the uncertain system 

This step includes choice of the nominal model and modelling uncertainty using any 
unstructured uncertainty (41) -(46) or (55). The nominal model can be calculated either as the 
mean value parameter model (Skogestad & Postlethwaite, 2005), or the "affine" model, 
obtained within the procedure for calculating the affine-type additive uncertainty 
(Kozakova & Vesely, 2007; 2008). Unlike the standard robust approach to decentralized 
control design which considers diagonal model as the nominal one (interactions are 
included in the uncertainty), the ESM method applied in the design for nominal 
performance allows to consider the full nominal model. 

2. Guaranteeing nominal stability and performance 

The ESM method is used to design a decentralized controller (62) guaranteeing stability and 
specified performance of the nominal model (nominal stability, nominal performance). 

3. Guaranteeing robust stability 

In addition to nominal performance, the decentralized controller has to guarantee closed-loop 
stability over the whole operating range of the plant specified by the chosen 
uncertainty description (robust stability). Robust stability is examined by means of the M-Δ 
stability condition (47), or the M_af-Q stability condition (59) in the case of the affine-type additive 
uncertainty (55). 

Corollary 3.2 (Robust stability conditions under DC) 

The closed loop in Fig. 3 comprising the uncertain system given as a set of transfer function 
matrices and described by any type of unstructured uncertainty (41) - (46) or (55) with 
nominal model fulfilling (60), and the decentralized controller (62), is stable over the 
pertinent uncertainty region if any of the following conditions hold: 

1. for any of (41)-(46), the conditions of Corollary 3.1 and (47) are simultaneously satisfied, where 
M(s) = ℓ_k(s) M_k(s), k = a, i, o, ia, ii, io, and M_k is given by (48)-(53), respectively; 

2. for (55), the conditions of Corollary 3.1 and (59) are simultaneously satisfied. 

Based on Corollary 3.2, two approaches to robust decentralized control design have been 
developed: the two-stage and the direct approach. 

1. The two-stage robust decentralized controller design approach based on the M-Δ structure stability 
conditions (Kozakova & Vesely, 2008; Kozakova & Vesely, 2009; Kozakova et al., 2009a). 

In the first stage, the decentralized controller for the nominal system is designed using the ESM; 
afterwards, fulfilment of the M-Δ or M_af-Q stability conditions (47) or (59), respectively, is 
examined. If satisfied, the design procedure stops; otherwise the second stage follows: either the 
controller parameters are additionally modified to satisfy the robust stability conditions in the 
tightest possible way (Kozakova et al., 2009a), or the redesign is carried out with modified 
performance requirements (Kozakova & Vesely, 2009). 



2. Direct decentralized controller design for robust stability and nominal performance 

By direct integration of the robust stability condition (47) or (59) in the ESM, local controllers 
of equivalent subsystems are designed with regard to robust stability. The performance 
specification for the full system in terms of the maximum peak of the complementary 
sensitivity M_T, corresponding to the maximum overshoot in individual equivalent subsystems, 
is translated into lower bounds for their phase margins according to (78) (Skogestad & 
Postlethwaite, 2005)

PM ≥ 2 arcsin( 1 / (2 M_T) ) ≥ 1 / M_T   [rad]                                     (78)

where PM is the phase margin and M_T is the maximum peak of the complementary sensitivity

T(s) = G(s)R(s)[ I + G(s)R(s) ]^{-1}                                               (79)

As for MIMO systems

M_T = max_ω σ_max[ T(jω) ]                                                         (80)

the upper bound for M_T can be obtained using singular value properties in manipulations of 
the M-Δ condition (47) considering (48)-(53), or of the M_af-Q condition (58) considering (57) 
and (59). The following upper bounds on σ_max[ T_0(jω) ] for the nominal complementary 
sensitivity T_0(s) = G_0(s)R(s)[ I + G_0(s)R(s) ]^{-1} have been derived:

σ_max[ T_0(jω) ] < σ_min[ G_0(jω) ] / ℓ_a(ω) = L_a(ω)   ∀ω       additive uncertainty                        (81)

σ_max[ T_0(jω) ] < 1 / ℓ_k(ω) = L_k(ω) ,  k = i, o ,  ∀ω         multiplicative input/output uncertainty      (82)

σ_max[ T_0(jω) ] < σ_min[ G_0(jω) ] / ( q_max √p · σ_max[ G_u(jω) ] ) = L_AF(ω)   ∀ω    additive affine-type uncertainty   (83)

Using (80) and (78), the upper bounds for the complementary sensitivity of the nominal 
system (81)-(83) can be directly implemented in the ESM, owing to the fact that the performance 
achieved in equivalent subsystems is simultaneously guaranteed for the full system. The 
main benefit of this approach is the possibility to specify the maximum overshoot in the full 
system guaranteeing robust stability in terms of σ_max(T_0), translate it into a minimum phase 
margin of the equivalent subsystems and design local controllers independently for the individual 
single-input single-output equivalent subsystems. 
The design procedure is illustrated in the next subsection. 
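The translation from the robust-stability bound to the phase-margin specification can be sketched as follows for the additive case (81) together with (78); for the affine-type uncertainty the bound L_AF of (83) would be used instead. G_0 and ℓ_a are assumed to be callables in this illustrative sketch.

```python
import numpy as np

def phase_margin_bound(M_T):
    """Lower bound (78) on the phase margin [deg] implied by a maximum
    complementary-sensitivity peak M_T."""
    return np.degrees(2 * np.arcsin(1.0 / (2.0 * M_T)))

def overshoot_to_pm(G0, l_a, omegas):
    """Direct-design step for the additive case: evaluate the bound L_a(w)
    of (81) on a grid, take its worst (minimum) value as the admissible M_T
    and translate it into the required phase margin of equivalent subsystems."""
    L_a = np.array([np.linalg.svd(G0(1j * w), compute_uv=False)[-1] / l_a(w)
                    for w in omegas])
    M_T = L_a.min()
    return M_T, phase_margin_bound(M_T)

# e.g. phase_margin_bound(1.556) gives roughly 37.5 degrees
```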

3.4 Example 

Consider a laboratory plant consisting of two interconnected DC motors, where each 
armature voltage (u_1, u_2) affects the rotor speeds of both motors (ω_1, ω_2). The plant was 
identified in three operating points and is given as a set Π = { G_1(s), G_2(s), G_3(s) } where 



G_1(s) = [ (−0.402s + 2.690)/(s² + 2.870s + 1.840)      (0.006s − 1.680)/(s² + 11.570s + 3.780) ]
         [ (0.003s − 0.720)/(s² + 9.850s + 1.764)       (−0.170s + 1.630)/(s² + 1.545s + 0.985) ]

G_2(s) = [ (−0.342s + 2.290)/(s² + 2.070s + 1.840)      (0.005s − 1.510)/(s² + 10.570s + 3.780) ]
         [ (0.003s − 0.580)/(s² + 8.850s + 1.764)       (−0.160s + 1.530)/(s² + 1.045s + 0.985) ]

G_3(s) = [ (−0.423s + 2.830)/(s² + 4.870s + 1.840)      (0.006s − 1.930)/(s² + 13.570s + 3.780) ]
         [ (0.004s − 0.790)/(s² + 10.850s + 1.764)      (−0.200s + 1.950)/(s² + 1.945s + 0.985) ]

In calculating the affine nominal model G_0(s), all possible allocations of G_1(s), G_2(s), G_3(s) into 
the 2² = 4 polytope vertices were examined (24 combinations), yielding 24 affine nominal 
model candidates and the related transfer function matrices G_u(s) needed to complete the 
description of the uncertainty region. The selected affine nominal model G_0(s) is the one 
guaranteeing the smallest additive uncertainty calculated according to (41):

G_0(s) = [ (−0.413s + 2.759)/(s² + 3.870s + 1.840)      (−0.006s − 1.807)/(s² + 12.570s + 3.780) ]
         [ (0.004s − 0.757)/(s² + 10.350s + 1.764)      (−0.187s + 1.791)/(s² + 1.745s + 0.985) ]

The upper bound L_AF(ω) for T_0(s) calculated according to (83) is plotted in Fig. 5. Its worst 
(minimum) value M_T = min_ω L_AF(ω) = 1.556 corresponds to PM ≥ 37.48° according to (78). 



Fig. 5. Plot of L_AF(ω) calculated according to (83) 

The Bode diagram design of local controllers for the guaranteed PM was carried out for 
equivalent subsystems generated according to (74) using the characteristic locus g_2(s) of the 
matrix of interactions G_m(s), i.e. G_i^eq(s) = G_i(s) + g_2(s), i = 1, 2. Bode diagrams of the equivalent 
subsystems G_1^eq(s), G_2^eq(s) are in Fig. 6. Applying the PI controller design from the Bode diagram 
for the required phase margin PM = 39° has yielded the following local controllers

R_1(s) = (3.367s + 1.27) / s ,        R_2(s) = (1.803s + 0.491) / s

Bode diagrams of the compensated equivalent subsystems in Fig. 7 prove the achieved phase 
margin. Robust stability was verified using the original M_af-Q condition (59) with p = 2 and 
q_max = 1; as depicted in Fig. 8, the closed loop under the designed controller is robustly stable. 





















































Fig. 6. Bode diagrams of equivalent subsystems G_1^eq(s) (left) and G_2^eq(s) (right) 





















Fig. 7. Bode diagrams of equivalent subsystems G_1^eq(s) (left) and G_2^eq(s) (right) under the 
designed local controllers R_1(s) and R_2(s), respectively. 



Fig. 8. Verification of robust stability using condition (59) in the form σ_max(M_af) < 1/(q_max √p) 



4. Conclusion 

The chapter reviews recent results on robust controller design for linear uncertain systems, 
applicable also to decentralized control design. 

In the first part of the chapter a new robust PID controller design method based on LMI is 
proposed for uncertain linear systems. The important feature of this PID design approach is 
that the derivative term appears in such a form that enables considering the model 
uncertainties. The guaranteed cost control is proposed with a new quadratic cost function 
including the derivative term for the state vector as a tool to influence the overshoot and 
response rate. 

In the second part of the chapter a novel frequency-domain approach to decentralized 
controller design for guaranteed performance is proposed. Its principle consists in including 
plant interactions in individual subsystems through their characteristic functions, thus 
yielding a diagonal system of equivalent subsystems. Local controllers of equivalent 
subsystems independently tuned for specified performance constitute the decentralized 
controller guaranteeing the same performance for the full system. The proposed approach 
allows direct integration of robust stability conditions in the design of local controllers of 
equivalent subsystems. 
The theoretical results are supported by the results obtained by solving examples. 

5. Acknowledgment 

This research work has been supported by the Scientific Grant Agency of the Ministry of 
Education of the Slovak Republic, Grant No. 1/0544/09. 



6. References 

Blondel, V. & Tsitsiklis, J.N. (1997). NP-hardness of some linear control design problems. 

SIAM J. Control Optim., Vol. 35, 2118-2127. 
Boyd, S.; El Ghaoui, L.; Feron, E. & Balakrishnan, V. (1994). Linear matrix inequalities in system 

and control theory, SIAM Studies in Applied Mathematics, Philadelphia. 




Crusius, C.A.R. & Trofino, A. (1999). LMI Conditions for Output Feedback Control 

Problems. IEEE Trans. Aut. Control Vol. 44, 1053-1057. 
de Oliveira, M.C.; Bernussou, J. & Geromel, J.C. (1999). A new discrete-time robust stability 

condition. Systems and Control Letters, Vol. 37, 261-265. 
de Oliveira, M.C.; Camino, J.F. & Skelton, R.E. (2000). A convexifying algorithm for the design of structured linear controllers, Proc. 39th IEEE CDC, pp. 2781-2786, Sydney, Australia, 2000.
Ming Ge; Min-Sen Chiu & Qing-Guo Wang (2002). Robust PID controller design via LMI 

approach. Journal of Process Control, Vol.12, 3-13. 
Grman, L. ; Rosinova, D. ; Kozakova, A. & Vesely, V. (2005). Robust stability conditions for 

polytopic systems. International Journal of Systems Science, Vol. 36, No. 15, 961-973, 

ISSN 1464-5319 (electronic) 0020-7721 (paper) 
Gyurkovics, E. & Takacs, T. (2000). Stabilisation of discrete-time interconnected systems 

under control constraints. IEE Proceedings - Control Theory and Applications, Vol. 147, 

No. 2, 137-144 
Han, J. & Skelton, R.E. (2003). An LMI optimization approach for structured linear controllers, Proc. 42nd IEEE CDC, 5143-5148, Hawaii, USA, 2003
Henrion, D.; Arzelier, D. & Peaucelle, D. (2002). Positive polynomial matrices and improved LMI robustness conditions. 15th IFAC World Congress, CD-ROM, Barcelona, Spain, 2002
Kozakova, A. & Vesely, V. (2007). Robust decentralized controller design for systems with additive affine-type uncertainty. Int. J. of Innovative Computing, Information and Control (IJICIC), Vol. 3, No. 5 (2007), 1109-1120, ISSN 1349-4198.
Kozakova, A. & Vesely, V. (2008). Robust MIMO PID controller design using additive 

affine-type uncertainty. Journal of Electrical Engineering, Vol. 59, No.5 (2008), 241- 

247, ISSN 1335 - 3632 
Kozakova, A., Vesely, V. (2009). Design of robust decentralized controllers using the M-A 

structure robust stability conditions. Int. Journal of Systems Science, Vol. 40, No.5 

(2009), 497-505, ISSN 1464-5319 (electronic) 0020-7721 (paper). 
Kozakova, A.; Vesely, V. & Osusky, J. (2009a). A new Nyquist-based technique for tuning 

robust decentralized controllers, Kybernetika, Vol. 45, No.l (2009), 63-83, ISSN 0023- 

5954. 
Kozakova, A.; Vesely, V. Osusky, J. (2009b). Decentralized Controllers Design for 

Performance: Equivalent Subsystems Method, Proceedings of the European Control 

Conference, ECC09, 2295-2300, ISBN 978-963-311-369-1, Budapest, Hungary August 

2009, EUCA Budapest. 
Peaucelle, D.; Arzelier, D.; Bachelier, O. & Bernussou, J. (2000). A new robust D-stability 

condition for real convex polytopic uncertainty. Systems and Control Letters, Vol. 40, 

21-30 
Rosinova, D.; Vesely, V. & Kucera, V. (2003). A necessary and sufficient condition for static 

output feedback stabilizability of linear discrete-time systems. Kybernetika, Vol. 39, 

447-459 
Rosinova, D. & Vesely, V. (2003). Robust output feedback design of discrete-time systems - linear matrix inequality methods. Proceedings 2nd IFAC Conf. CSD'03 (CD-ROM), Bratislava, Slovakia, 2003




Skelton, R.E.; Iwasaki, T. & Grigoriadis, K. (1998). A Unified Algebraic Approach to Linear 

Control Design, Taylor and Francis, Ltd, London, UK 
Skogestad, S. & Postlethwaite, I. (2005). Multivariable feedback control: analysis and design, John Wiley & Sons Ltd., ISBN 978-0-470-01167-6, The Atrium, Southern Gate, Chichester, West Sussex, UK
Vesely, V. (2003). Robust output feedback synthesis: LMI Approach, Proceedings 2nd IFAC Conference CSD'03 (CD-ROM), Bratislava, Slovakia, 2003
Zheng Feng; Qing-Guo Wang & Tong Heng Lee (2002). On the design of multivariable PID 

controllers via LMI approach. Automatica, Vol. 38, 517-526 



11 



Robust Stabilization and Discretized PID Control 

Yoshifumi Okuyama 

Tottori University, Emeritus 
Japan 



1. Introduction 

At present, almost all feedback control systems are realized using discretized 
(discrete-time and discrete-value, i.e., digital) signals. However, the analysis and design 
of discretized/quantized control systems has not been entirely elucidated. The first attempt
to elucidate the problem was described in a paper by Kalman (1) in 1956. Since then, 
many researchers have studied this problem, particularly the aspect of understanding and 
mitigating the quantization effects in quantized feedback control, e.g.,(2-4). However, few 
results have been obtained for the stability analysis of the nonlinear discrete-time feedback 
system. 

This article describes the robust stability analysis of discrete-time and discrete-value control 
systems and presents a method for designing (stabilizing) PID control for nonlinear 
discretized systems. The PID control scheme has been widely used in practice and theory 
thus far irrespective of whether it is continuous or discrete in time (5; 6) since it is a basic 
feedback control technique. 

In the previous study (7-9), a robust stability condition for nonlinear discretized control 
systems that accompany discretizing units (quantizers) at equal spaces was examined in a 
frequency domain. It was assumed that the discretization is executed at the input and output 
sides of a nonlinear continuous element (sensor/actuator) and that the sampling period is
chosen such that the size is suitable for discretization in the space. This paper presents a 
designing problem for discretized control systems on a grid pattern in the time and controller 
variables space. In this study, the concept of modified Nyquist and Nichols diagrams for 
nonlinear control systems given in (10; 11) is applied to the designing procedure in the 
frequency domain. 




Fig. 1. Nonlinear discretized PID control system. 




2. Discretized control system 

The discretized control system in question is represented by a sampled-data (discrete-time) 
feedback system as shown in Fig. 1. In the figure, G(z) is the z-transform of continuous plant 
G(s) together with the zero-order hold, C(z) is the z-transform of the digital PID controller, 
and V_1 and V_2 are the discretizing units at the input and output sides of the nonlinear element, respectively.

The relationship between e and u† = N_d(e) is a stepwise nonlinear characteristic on an integer-grid pattern. Figure 2 (a) shows an example of a discretized sigmoid-type nonlinear characteristic. In C-language expression, the input/output characteristic can be written as

    e† = γ * (double)(int)(e/γ)
    u  = 0.4 * e† + 3.0 * atan(0.6 * e†)                           (1)
    u† = γ * (double)(int)(u/γ),

where (int) and (double) denote the conversion into an integer (a round-down discretization) and the reconversion into a double-precision real number, respectively. Note that even if the continuous characteristic is linear, the input/output characteristic becomes nonlinear on a grid pattern as shown in Fig. 2 (b), where the linear continuous characteristic is chosen as u = 0.85·e†.

In this study, a round-down discretization, which is usually executed on a computer, is applied. Therefore, the relationship between e† and u† is indicated by small circles on the stepwise nonlinear characteristic. Here, each signal e†, u†, … can be assigned to an integer number as follows:



e† ∈ {…, −3γ, −2γ, −γ, 0, γ, 2γ, 3γ, …},
u† ∈ {…, −3γ, −2γ, −γ, 0, γ, 2γ, 3γ, …},

where γ is the resolution of each variable. Without loss of generality, hereafter, it is assumed that γ = 1.0. That is, the variables e†, u†, … are defined by integers as follows:

e†, u† ∈ Z,   Z = {…, −3, −2, −1, 0, 1, 2, 3, …}.

On the other hand, the time variable t is given as t ∈ {0, h, 2h, 3h, …} for the sampling period h. When assuming h = 1.0, the following expression can be defined:

t ∈ Z₊,   Z₊ = {0, 1, 2, 3, …}.

Therefore, each signal e†(t), u†(t), … traces on a grid pattern that is composed of integers in the time and controller variables space.
The discretized nonlinear characteristic

u† = N_d(e†) = K·e† + g(e†),   0 < K < ∞,                      (2)

as shown in Fig. 2(a) is partitioned into the following two sections:

|g(e†)| ≤ ḡ < ∞,                                               (3)

for |e†| ≤ ε, and

|g(e†)| ≤ β|e†|,   0 ≤ β < K,                                  (4)



Fig. 2. Discretized nonlinear characteristics on a grid pattern: (a) discretized sigmoid-type characteristic, (b) discretized linear characteristic.

for |e†| > ε. (In Fig. 2 (a) and (b), the threshold is chosen as ε = 2.0.)

Equation (3) represents a bounded nonlinear characteristic that exists in a finite region. On the other hand, equation (4) represents a sectorial nonlinearity for which the equivalent linear gain exists in a limited range. It can also be expressed as follows:

0 ≤ g(e†)·e† ≤ β·e†² < K·e†².                                  (5)

When considering the robust stability in a global sense, it is sufficient to consider the nonlinear term (4) for |e†| > ε because the nonlinear term (3) can be treated as a disturbance signal. (In the stability problem, a fluctuation or an offset of error is assumed to be allowable in |e†| ≤ ε.)



"1 



1 + qS 



£*(') 



+ 



+ 



-j 8(e) 



Fig. 3. Nonlinear subsystem g(e). 





Fig. 4. Equivalent feedback system.




3. Equivalent discrete-time system 

In this study, the following new sequences e‡(k) and v‡(k) are defined based on the above consideration:

e‡(k) = e†_m(k) + q·Δe†(k)/h,                                  (6)

v‡(k) = v†_m(k) − βq·Δe†(k)/h,                                 (7)

where q is a non-negative number, e†_m(k) and v†_m(k) are neutral points of the sequences e†(k) and v†(k),

e†_m(k) = (e†(k) + e†(k−1))/2,                                 (8)

v†_m(k) = (v†(k) + v†(k−1))/2,                                 (9)

and Δe†(k) is the backward difference of sequence e†(k), that is,

Δe†(k) = e†(k) − e†(k−1).                                      (10)

The relationship between equations (6) and (7) with respect to the continuous values is shown by the block diagram in Fig. 3. In this figure, δ is defined as

δ(z) := (2/h)·(z − 1)/(z + 1).                                 (11)

Thus, the loop transfer function from v‡ to e‡ can be given by W(β, q, z), as shown in Fig. 4, where

W(β, q, z) = (1 + qδ(z))·G(z)C(z) / ( 1 + (K + βqδ(z))·G(z)C(z) ),   (12)

and r', d' are transformed exogenous inputs. Here, the variables such as v‡, u' and y' written in Fig. 4 indicate the z-transformed ones.

In this study, the following assumption is provided on the basis of the relatively fast sampling 
and the slow response of the controlled system. 

[Assumption] The absolute value of the backward difference of sequence e(k) does not exceed γ, i.e.,

|Δe(k)| = |e(k) − e(k−1)| ≤ γ.                                 (13)

If condition (13) is satisfied, Δe†(k) is exactly ±γ or 0 because of the discretization. That is, the absolute value of the backward difference can be given as

|Δe†(k)| = |e†(k) − e†(k−1)| = γ or 0.   □

The assumption stated above will be satisfied in the following examples. The phase trace of the backward difference Δe† is shown in the figures.







Fig. 5. Nonlinear characteristics and discretized outputs. 

4. Norm inequalities 

In this section, some lemmas with respect to an ℓ₂ norm of the sequences are presented. Here, we define a new nonlinear function

f(e) := g(e) + βe.                                             (14)

When considering the discretized output of the nonlinear characteristic, v† = g(e†), the following expression can be given:

f(e†(k)) = v†(k) + βe†(k).                                     (15)

From inequality (4), it can be seen that the function (15) belongs to the first and third quadrants. Figure 5 shows an example of the continuous nonlinear characteristics u = N(e) and f(e), the discretized outputs u† = N_d(e†) and f(e†), and the sector (4) to be considered. When considering the equivalent linear characteristic, the following inequality can be defined:

0 ≤ ψ(k) := f(e†(k))/e†(k) ≤ 2β.                               (16)

When this type of nonlinearity ψ(k) is used, inequality (4) can be expressed as

v†(k) = g(e†(k)) = (ψ(k) − β)·e†(k).                           (17)

For the neutral points of e†(k) and v†(k), the following expression is given from (15):

(1/2)·( f(e†(k)) + f(e†(k−1)) ) = v†_m(k) + βe†_m(k).          (18)

Moreover, equation (17) is rewritten as v†_m(k) = (ψ(k) − β)·e†_m(k). Since |e†_m(k)| ≤ |e_m(k)|, the following inequality is satisfied when a round-down discretization is executed:

|v†_m(k)| ≤ β|e†_m(k)| ≤ β|e_m(k)|.                            (19)




Based on the above premise, the following norm conditions are examined.
[Lemma-1] The following inequality holds for a positive integer p:

||v†_m(k)||_{2,p} ≤ β||e†_m(k)||_{2,p} ≤ β||e_m(k)||_{2,p}.    (20)

Here, ||·||_{2,p} denotes the Euclidean norm, which can be defined by

||x(k)||_{2,p} := ( Σ_{k=1}^{p} x²(k) )^{1/2}.

(Proof) The proof is clear from inequality (19). □

[Lemma-2] If the following inequality is satisfied with respect to the inner product of the neutral points of (15) and the backward difference:

⟨v†_m(k) + βe†_m(k), Δe†(k)⟩_p ≥ 0,                            (21)

the following inequality can be obtained:

||v‡(k)||_{2,p} ≤ β||e‡(k)||_{2,p}                             (22)

for any q ≥ 0. Here, ⟨·,·⟩_p denotes the inner product, which is defined as

⟨x₁(k), x₂(k)⟩_p = Σ_{k=1}^{p} x₁(k)·x₂(k).

(Proof) The following equation is obtained from (6) and (7):

β²||e‡(k)||²_{2,p} − ||v‡(k)||²_{2,p} = β²||e†_m(k)||²_{2,p} − ||v†_m(k)||²_{2,p} + (2βq/h)·⟨v†_m(k) + βe†_m(k), Δe†(k)⟩_p.   (23)

Thus, (22) is satisfied by using the left inequality of (20). Moreover, as for the input of g‡(·), the following inequality can be obtained from (23) and the right inequality of (20):

||v‡(k)||_{2,p} ≤ β||e‡(k)||_{2,p}.                            (24)

□

The left side of inequality (21) can be expressed as a sum of trapezoidal areas.
[Lemma-3] For any step p, the following equation is satisfied:

σ(p) := ⟨v†_m(k) + βe†_m(k), Δe†(k)⟩_p = (1/2)·Σ_{k=1}^{p} ( f(e†(k)) + f(e†(k−1)) )·Δe†(k).   (25)

(Proof) The proof is clear from (18). □

For ease of understanding, an example of the sequences of continuous/discretized signals and the sum of trapezoidal areas is depicted in Fig. 6. The curve e and the sequence of circles e† show the input of the nonlinear element and its discretized signal. The curve u and the sequence of circles u† show the corresponding output of the nonlinear characteristic and its discretized signal, respectively. As is shown in the figure, the sequences of circles e† and u† trace on a grid pattern that is composed of integers. The sequence of circles v† shows












Fig. 6. Discretized input/output signals of a nonlinear element.

the discretized output of the nonlinear characteristic g(·). The curve of the shifted nonlinear characteristic f(e) and the sequence of circles f(e†) are also shown in the figure.
In general, the sum of trapezoidal areas has the following property.

[Lemma-4] If inequality (13) is satisfied with respect to the discretization of the control system, the sum of trapezoidal areas becomes non-negative for any p, that is,

σ(p) ≥ 0.                                                      (26)

(Proof) Since f(e†(k)) belongs to the first and third quadrants, the area of each trapezoid

τ(k) := (1/2)·( f(e†(k)) + f(e†(k−1)) )·Δe†(k)                 (27)

is non-negative when e(k) increases (decreases) in the first (third) quadrant.



On the other hand, the trapezoidal area τ(k) is non-positive when e(k) decreases (increases) in the first (third) quadrant. Strictly speaking, when (e(k) > 0 and Δe(k) > 0) or (e(k) < 0 and Δe(k) < 0), τ(k) is non-negative for any k. On the other hand, when (e(k) > 0 and Δe(k) < 0) or (e(k) < 0 and Δe(k) > 0), τ(k) is non-positive for any k. Here, Δe(k) > 0 corresponds to Δe†(k) = γ or 0 (and Δe(k) < 0 corresponds to Δe†(k) = −γ or 0) for the discretized signal, when inequality (13) is satisfied.
The sum of trapezoidal areas is given from (25) as:

σ(p) = Σ_{k=1}^{p} τ(k).                                       (28)

Therefore, the following result is derived based on the above. The sum of trapezoidal areas becomes non-negative, σ(p) ≥ 0, regardless of whether e(k) (and e†(k)) increases or decreases. Since the discretized output traces the same points on the stepwise nonlinear characteristic, the sum of trapezoidal areas is canceled when e(k) (and e†(k)) decreases (increases) from a certain point (e†(k), f(e†(k))) in the first (third) quadrant. (Here, without loss of generality, the response of the discretized point (e†(k), f(e†(k))) is assumed to commence at the origin.) Thus, the proof is concluded. □
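The following C sketch is a small numerical check (not part of the original text) of Lemma-4: it accumulates the trapezoidal areas of Eqs. (27)-(28) for the discretized sigmoid of Eq. (1), assuming γ = h = 1, K = 1, β = 0.5 (the sector [0.5, 1.5] used later in Example-1) and a slow sinusoidal input so that assumption (13) holds, and reports whether σ(p) stays non-negative.

    /* Numerical check of Lemma 4: sigma(p) of Eqs. (25)/(28) should stay >= 0. */
    #include <stdio.h>
    #include <math.h>

    double quantize(double x) { return (double)(int)x; }          /* gamma = 1 */
    double Nd(double e) { double ed = quantize(e);
                          return quantize(0.4*ed + 3.0*atan(0.6*ed)); }

    int main(void)
    {
        const double K = 1.0, beta = 0.5;
        double e_prev = 0.0, sigma = 0.0;
        for (int k = 1; k <= 200; k++) {
            double e  = 5.0 * sin(0.05 * k);       /* slow input: |e(k)-e(k-1)| < gamma */
            double ed = quantize(e), ed_prev = quantize(e_prev);
            /* f(e†) = g(e†) + beta*e† with g(e†) = Nd(e†) - K*e†, Eqs. (2), (14) */
            double f  = Nd(ed)      - K*ed      + beta*ed;
            double fp = Nd(ed_prev) - K*ed_prev + beta*ed_prev;
            double tau = 0.5 * (f + fp) * (ed - ed_prev);          /* Eq. (27) */
            sigma += tau;                                          /* Eq. (28) */
            if (sigma < 0.0) printf("sigma(%d) = %g < 0 !\n", k, sigma);
            e_prev = e;
        }
        printf("final sigma = %g (expected >= 0)\n", sigma);
        return 0;
    }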




5. Robust stability in a global sense 

By applying a small gain theorem to the loop transfer characteristic (12), the following robust stability condition of the discretized nonlinear control system can be derived.
[Theorem] If there exists a q ≥ 0 for which the sector parameter β with respect to the nonlinear term g(·) satisfies the following inequality, the discrete-time control system with sector nonlinearity (4) is robust stable in an ℓ₂ sense:

β < β₀ = K·η(q₀, ω₀) = max_q min_ω K·η(q, ω),                  (29)

when the linearized system with nominal gain K is stable.
The η-function is written as follows:

η(q, ω) := ( −qΩ sinθ + √( q²Ω² sin²θ + ρ² + 2ρ cosθ + 1 ) ) / ρ,   ∀ω ∈ [0, ω_c],   (30)

where Ω(ω) is the distorted frequency of the angular frequency ω and is given by

δ(e^{jωh}) = jΩ(ω) = j·(2/h)·tan(ωh/2),   j = √−1,             (31)

and ω_c is a cut-off frequency. In addition, ρ(ω) and θ(ω) are the absolute value and the phase angle of KG(e^{jωh})C(e^{jωh}), respectively.

(Proof) Based on the loop characteristic in Fig. 4, the following inequality can be given with respect to z = e^{jωh}:

||e‡(z)||_{2,p} ≤ c₁||r'_m(z)||_{2,p} + c₂||d'_m(z)||_{2,p} + sup_{|z|=1} |W(β, q, z)| · ||v‡(z)||_{2,p}.   (32)

Here, r'_m(z) and d'_m(z) denote the z-transformation of the neutral points of the sequences r'(k) and d'(k), respectively. Moreover, c₁ and c₂ are positive constants.
By applying inequality (24), the following expression is obtained:

( 1 − β · sup_{|z|=1} |W(β, q, z)| ) · ||e‡(z)||_{2,p} ≤ c₁||r'_m(z)||_{2,p} + c₂||d'_m(z)||_{2,p}.   (33)

Therefore, if the following inequality (i.e., the small gain theorem with respect to ℓ₂ gains) is valid,

β·|W(β, q, e^{jωh})| = β·|(1 + jqΩ(ω))·G(e^{jωh})C(e^{jωh})| / |1 + (K + jβqΩ(ω))·G(e^{jωh})C(e^{jωh})|
                     = β·|(1 + jqΩ(ω))·ρ(ω)e^{jθ(ω)}| / |K + (K + jβqΩ(ω))·ρ(ω)e^{jθ(ω)}| < 1,   (34)

the sequences e‡(k), e_m(k), e(k) and y(k) in the feedback system are restricted to finite values when the exogenous inputs r(k), d(k) are finite and p → ∞. (The definition of ℓ₂ stability for discrete-time systems was given in (10; 11).)
From the square of both sides of inequality (34),

β²ρ²(1 + q²Ω²) < (K + Kρ cosθ − βρqΩ sinθ)² + (Kρ sinθ + βρqΩ cosθ)².







Fig. 7. An example of the modified Nichols diagram (M = 1.4, c_q = 0.0, 0.2, …, 4.0).

Thus, the following quadratic inequality can be obtained:

β²ρ² ≤ −2βKρqΩ sinθ + K²(1 + ρ cosθ)² + K²ρ² sin²θ.            (35)

Consequently, as a solution of inequality (35),

β ≤ ( −KqΩ sinθ + K·√( q²Ω² sin²θ + ρ² + 2ρ cosθ + 1 ) ) / ρ = K·η(q, ω).   (36)

□
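As an illustration of how the bound (29) can be evaluated, the following C sketch grids q and ω and computes K·η(q, ω) from (30)-(31). The loop transfer function used here is a hypothetical first-order plant with a PI controller in the form of (46), chosen only to make the sketch self-contained; it is not one of the chapter's examples.

    /* Grid search for beta0 = max_q min_w K*eta(q,w), Eqs. (29)-(31). */
    #include <stdio.h>
    #include <math.h>
    #include <complex.h>

    int main(void)
    {
        const double h = 1.0, K = 1.0;
        const double PI = acos(-1.0);
        double beta0 = 0.0;
        for (double q = 0.0; q <= 4.0; q += 0.1) {            /* outer: max over q */
            double eta_min = 1e30;
            for (double w = 0.01; w < PI/h; w += 0.01) {      /* inner: min over w */
                double complex z = cexp(I*w*h);
                double complex G = 0.05/(z - 0.9);            /* hypothetical plant G(z)   */
                double complex C = 2.0 + 0.3/(1.0 - 1.0/z);   /* PI controller, Eq. (46)   */
                double complex L = K*G*C;                     /* K*G(z)*C(z)               */
                double rho = cabs(L), th = carg(L);
                double Om  = (2.0/h)*tan(w*h/2.0);            /* distorted frequency, (31) */
                double eta = (-q*Om*sin(th)
                              + sqrt(q*q*Om*Om*sin(th)*sin(th)
                                     + rho*rho + 2.0*rho*cos(th) + 1.0)) / rho;  /* (30) */
                if (eta < eta_min) eta_min = eta;
            }
            if (K*eta_min > beta0) beta0 = K*eta_min;
        }
        printf("beta0 = max_q min_w K*eta(q,w) = %g\n", beta0);   /* Eq. (29) */
        return 0;
    }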

6. Modified Nichols diagram 

In the previous papers, the inverse function was used instead of the η-function, i.e., ξ(q, ω) := 1/η(q, ω). Using this notation, inequality (29) can be rewritten as follows:

M = ξ(q₀, ω₀) = min_q max_ω ξ(q, ω) ≤ K/β.                     (37)

When q = 0, the ξ-function can be expressed as:

ξ(0, ω) = ρ / √( ρ² + 2ρ cosθ + 1 ) = |T(e^{jωh})|,            (38)

where T(z) is the complementary sensitivity function for the discrete-time system.
It is evident that the following curve on the gain-phase plane,

ξ(0, ω) = M,   (M: const.)                                     (39)




corresponds to the contour of constant M in the Nichols diagram. In this study, since an arbitrary non-negative number q is considered, the ξ-function that corresponds to (38) and (39) is given as follows:

ρ / ( −qΩ sinθ + √( q²Ω² sin²θ + ρ² + 2ρ cosθ + 1 ) ) = M.     (40)

From this expression, the following quadratic equation can be obtained:

(M² − 1)ρ² + 2ρM(M cosθ − qΩ sinθ) + M² = 0.                   (41)

The solution of this equation is expressed as follows:

ρ = −(M/(M² − 1))·(M cosθ − qΩ sinθ) ± (M/(M² − 1))·√( (M cosθ − qΩ sinθ)² − (M² − 1) ).   (42)

The modified contour in the gain-phase plane (θ, ρ) is drawn based on equation (42). Although the distorted frequency Ω is a function of ω, the term qΩ = c_q ≥ 0 is assumed to be a constant parameter. This assumption for M contours was also discussed in (11). Figure 7 shows an example of the modified Nichols diagram for c_q ≥ 0 and M = 1.4. Here, GP₁ is a gain-phase curve that touches an M contour at the peak value (M_p = ξ(0, ω_p) = 1.4). On the other hand, GP₂ is a gain-phase curve that crosses the θ = −180° line and all the M contours at the gain crossover point P₂. That is, the gain margin g_M becomes equal to −20 log₁₀ M/(M + 1) = 4.68 [dB]. The latter case corresponds to the discrete-time system in which Aizerman's conjecture is valid (14; 15). At the continuous saddle point P₂, the following equation is satisfied:

∂ξ(q, ω)/∂q = 0.                                               (43)

Evidently, the phase margin is obtained from the phase crossover point Q₂.
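A modified M contour can be traced directly from (42). The following C sketch (an illustration, not from the original text) prints contour points (θ, ρ) for example values M = 1.4 and c_q = qΩ = 1.0, treating c_q as a constant parameter as in Fig. 7.

    /* Points of the modified M-contour from Eq. (42). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double M = 1.4, cq = 1.0;            /* example values            */
        const double PI = acos(-1.0);
        for (double th = -PI; th <= 0.0; th += PI/180.0) {
            double a = M*cos(th) - cq*sin(th);
            double disc = a*a - (M*M - 1.0);
            if (disc < 0.0) continue;              /* no real contour point here */
            double rho1 = (M/(M*M - 1.0))*(-a + sqrt(disc));
            double rho2 = (M/(M*M - 1.0))*(-a - sqrt(disc));
            if (rho1 > 0.0 || rho2 > 0.0)
                printf("theta = %7.2f deg  rho = %g, %g\n", th*180.0/PI, rho1, rho2);
        }
        return 0;
    }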

7. Controller design 

The PID controller applied in this study is given by the following algorithm:

u_c(k) = K_P·u†(k) + C_I·Σ_{j=0}^{k} u†(j) + C_D·Δu†(k),       (44)

where Δu†(k) = u†(k) − u†(k−1) is a backward difference in integer numbers, and each coefficient is defined as

K_P, C_I, C_D ∈ Z₊,   Z₊ = {0, 1, 2, 3, …}.

Here, K_P, C_I, and C_D correspond to K_P, K_P·h/T_I, and K_P·T_D/h in the following (discrete-time z-transform expression) PID algorithm:

C(z) = K_P·( 1 + (h/T_I)·1/(1 − z⁻¹) + (T_D/h)·(1 − z⁻¹) ).    (45)

We use algorithm (44) without division because the variables u†, u_c and the coefficients K_P, C_I, C_D are integers.






Using the z-transform expression, equation (44) is written as:

u_c(z) = C(z)·u(z) = ( K_P + C_I·(1 + z⁻¹ + z⁻² + ⋯) + C_D·(1 − z⁻¹) )·u(z).

In the closed form, controller C(z) can be given as

C(z) = K_P + C_I·1/(1 − z⁻¹) + C_D·(1 − z⁻¹)                   (46)

for discrete-time systems. When comparing equations (45) and (46), C_I and C_D become equal to K_P·h/T_I and K_P·T_D/h, respectively.

The design method adopted in this paper is based on the classical parameter specifications in the modified Nichols diagram. This method is convenient for design, and it is significant in a physical sense (i.e., mechanical vibration and resonance).

Furthermore, in this article, PID-D² is considered. The algorithm is written as

u_c(k) = K_P·u†(k) + C_I·Σ_{j=0}^{k} u†(j) + C_D1·Δu†(k) + C_D2·Δ²u†(k),   (47)

where

Δ²u†(k) = Δu†(k) − Δu†(k−1) = u†(k) − 2u†(k−1) + u†(k−2).      (48)

Thus, the controller C(z) can be given as

C(z) = K_P + C_I·1/(1 − z⁻¹) + C_D1·(1 − z⁻¹) + C_D2·(1 − 2z⁻¹ + z⁻²)      (49)

for discrete-time systems.
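The division-free integer algorithm (47)-(48) can be coded directly. The C sketch below uses the PID-D² coefficients of case (iv) of Table 2 and a short hypothetical input sequence; it illustrates only the update law, not the complete closed loop.

    /* Division-free integer PID-D^2 law of Eqs. (47)-(48). */
    #include <stdio.h>

    typedef struct { long Kp, Ci, Cd1, Cd2;        /* coefficients in Z+            */
                     long sum, u1, u2; } pidd2_t;  /* running sum, u†(k-1), u†(k-2) */

    long pidd2_step(pidd2_t *c, long u)            /* u = u†(k); returns u_c(k)     */
    {
        c->sum += u;                               /* summation (I term)            */
        long du  = u - c->u1;                      /* backward difference           */
        long d2u = u - 2*c->u1 + c->u2;            /* second difference, Eq. (48)   */
        long uc  = c->Kp*u + c->Ci*c->sum + c->Cd1*du + c->Cd2*d2u;   /* Eq. (47)   */
        c->u2 = c->u1;  c->u1 = u;
        return uc;
    }

    int main(void)
    {
        pidd2_t c = { 80, 3, 60, 120, 0, 0, 0 };   /* case (iv) of Table 2          */
        long test[] = { 0, 1, 2, 2, 1, 0, -1, -1, 0 };
        for (int k = 0; k < 9; k++)
            printf("k=%d  u_c=%ld\n", k, pidd2_step(&c, test[k]));
        return 0;
    }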

8. Numerical examples 

[Example-1] Consider the following third-order controlled system:

G(s) = K₁ / ( (s + 0.04)(s + 0.2)(s + 0.4) ),

where K₁ = 0.0002 = 2.0 × 10⁻⁴.





        K_P   C_I   C_D    β₀      g_M[dB]   P_M[deg]   M_p
(i)     100   0     0      –       7.72      34.9       1.82
(ii)    100   3     0      0.98    5.92      23.8       2.61
(iii)   100   3     120    –       11.1      35.4       1.69
(iv)     50   0     0      –       10.8      48.6       1.29
(v)      50   2     0      1.00    7.92      30.6       1.99
(vi)     50   2     60     –       13.3      40.5       1.45

Table 1. PID parameters for Example-1 (g_M: gain margins, P_M: phase margins, M_p: peak values, β₀: allowable sectors).








Fig. 8. Modified contours and gain-phase curves for Example-1 (M = 1.69, c_q = 0.0, 0.2, …, 4.0).



The discretized nonlinear characteristic (discretized sigmoid, i.e., arc tangent (12)) is as shown in Fig. 2 (a). In this article, the resolution value and the sampling period are assumed to be γ = 1.0 and h = 1.0 as described in section 2.

When choosing the nominal gain K = 1.0 and the threshold ε = 2.0, the sectorial area of the discretized nonlinear characteristic for ε < |e| can be determined as [0.5, 1.5], drawn by dotted lines in the figure. Figure 8 shows gain-phase curves of KG(e^{jωh})C(e^{jωh}) on the modified Nichols diagram. Here, GP₁, GP₂, and GP₃ are cases (i), (ii), and (iii), respectively. The PID parameters are specified as shown in Table 1. The gain margins g_M, the phase margins P_M and the peak values M_p can be obtained from the gain crossover points P, the phase crossover points Q, and the points of contact with the M contours, respectively.
The max-min value β₀ is calculated from (29) (e.g., for (ii)) as follows:

β₀ = max_q min_ω K·η(q, ω) = K·η(q₀, ω₀) = 0.98.

Therefore, the allowable sector for the nonlinear characteristic g(·) is given as [0.0, 1.98]. The stability of discretized control system (ii) (and also systems (i), (iii)) will be guaranteed. In this example, the continuous saddle point (43) appears (i.e., Aizerman's conjecture is satisfied). Thus, the allowable interval of the equivalent linear gain K_ℓ can be given as 0 < K_ℓ < 1.98. In the cases of (i) and (iii), β₀ becomes not less than K. However, from the definition of (4), β in the tables should be considered β₀ = β = 1.0. Figure 9 shows step responses for the three cases. In this figure, the time-scale line is drawn in 10h increments to avoid indistinctness. Sequences of the input u†(k) and the output u_c of the PID controller are also shown in the figure. Here, u_c(k) is drawn to the scale of 1/100. Figure 10 shows phase traces (i.e., sequences of (e(k), Δe(k)) and (e†(k), Δe†(k))). As is obvious from Fig. 10, assumption (13) is satisfied. The step response (i) retains a sustained oscillation and an off-set. However, as for (ii) and (iii) the responses are improved by using the PID, especially the integral (I: a summation in this paper) algorithm. A rough closed-loop simulation sketch for case (ii) is given below.
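The following C sketch combines the plant of Example-1, the discretized sigmoid of Eq. (1) and the PI parameters of case (ii) in Table 1. It is only a qualitative illustration: the plant is integrated by forward Euler with a fine internal step instead of the exact zero-order-hold discretization used in the chapter, and the step amplitude r = 20 is an arbitrary choice.

    /* Rough closed-loop sketch for Example-1, case (ii): Kp = 100, Ci = 3. */
    #include <stdio.h>
    #include <math.h>

    double quantize(double x) { return (double)(int)x; }          /* gamma = 1 */
    double Nd(double e) { double ed = quantize(e);
                          return quantize(0.4*ed + 3.0*atan(0.6*ed)); }

    int main(void)
    {
        const double K1 = 2.0e-4, h = 1.0, dt = 0.01, r = 20.0;   /* r is an assumption */
        const double Kp = 100.0, Ci = 3.0;                        /* Table 1, case (ii) */
        double x1 = 0, x2 = 0, x3 = 0;                            /* plant states       */
        double sum = 0, y = 0;
        for (int k = 0; k < 200; k++) {
            double e  = r - y;
            double u  = Nd(e);                         /* discretized nonlinearity      */
            sum += u;
            double uc = Kp*u + Ci*sum;                 /* PI law, Eq. (44) with C_D = 0 */
            for (double t = 0; t < h; t += dt) {       /* plant as a cascade of three   */
                x1 += dt*(-0.04*x1 + K1*uc);           /* first-order lags              */
                x2 += dt*(-0.2 *x2 + x1);
                x3 += dt*(-0.4 *x3 + x2);
            }
            y = x3;
            if (k % 10 == 0) printf("k=%3d  y=%8.3f\n", k, y);
        }
        return 0;
    }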

The discretized linear characteristic as shown in Fig. 2 (b) is also considered here. In the figure, the sectorial area of the discretized characteristic for ε < |e| can be determined as [0.5, 0.85], drawn by dotted lines, and the nominal gain is given as K = 0.675. When



Fig. 9. Step responses for Example-1. 




Fig. 10. Phase traces for Example-1. 



normalizing the nominal gain to K = 1.0 (i.e., choosing the gain constant K₂ = K₁/0.675), the sectorial area is determined as [0.74, 1.26]. In this case, an example of the step responses is depicted in Fig. 11. The PID parameters used here are also shown in Table 1.



[Example-2] Consider the following fourth-order controlled system:

G(s) = K₁ / ( (s + 0.04)(s + 0.2)(s + 0.4)(s + 1.0) ),         (50)

where K₁ = 0.0002 = 2.0 × 10⁻⁴. The same nonlinear characteristic and the nominal gain are chosen as in Example-1.

Figure 12 shows gain-phase curves of KG(e^{jωh})C(e^{jωh}) on the modified Nichols diagram. Here, GP₁, GP₂, GP₃ and GP₄ are cases (i), (ii), (iii) and (iv) in Table 2, respectively. In this example, the PID-D² control scheme is also used. The PID-D² parameters are specified as shown



Fig. 11. Step responses for Example-1 (Discretized linear case). 







Fig. 12. Modified contours and gain-phase curves for Example-2 (M = 2.14, c_q = 0.0, 0.2, …, 4.0).

in the table. The max-min value β₀ is calculated from (29) (e.g., for (iv)) as follows:

β₀ = max_q min_ω K·η(q, ω) = K·η(q₀, ω₀) = 0.69.

Therefore, the allowable sector for the nonlinear characteristic g(·) is given as [0.0, 1.69]. The stability of discretized control system (ii) (and also systems (i), (iii), (iv)) will be guaranteed. In this example, the continuous saddle point (43) appears (i.e., Aizerman's conjecture is satisfied). Thus, the allowable interval of the equivalent gain K_ℓ can be given as 0 < K_ℓ < 1.69. As is shown in Fig. 13, the step response (i) retains a sustained oscillation and an off-set. However, as for (ii), (iii) and (iv) the responses are improved by using the PI, PID and PID-D² algorithms (D²: a second difference).



Fig. 13. Step responses for Example-2. 





Fig. 14. Modified contours and gain-phase curves for Example-3 (M = 1.44, c_q = 0.0, …, 4.0).

[Example-3] Consider the following nonminimum-phase controlled system:

G(s) = K₃·(s + 0.2)(−s + 0.4) / ( (s + 0.02)(s + 0.04)(s + 1.0) ),   (51)





        K_P   C_I   C_D1   C_D2   β₀      g_M[dB]   P_M[deg]   M_p
(i)     80    0     0      0      –       6.8       37.2       1.79
(ii)    80    3     0      0      0.69    4.69      20.9       3.10
(iii)   80    3     60     0      1.00    6.63      27.4       2.26
(iv)    80    3     60     120    –       7.76      28.8       2.14

Table 2. PID-D² parameters for Example-2.



Fig. 15. Step responses for Example-3. 

where K₃ = 0.001 = 1.0 × 10⁻³. Also, in this example, the same nonlinear characteristic and the nominal gain are chosen as in Example-1. The modified Nichols diagram with gain-phase curves of KG(e^{jωh})C(e^{jωh}) is shown in Fig. 14. Here, GP₁, GP₂ and GP₃ are cases (i), (ii), and (iii), and the PID parameters are specified as shown in Table 3. Figure 15 shows time responses for the three cases.

For example, in the case of (iii), although the allowable interval of the equivalent linear gain is 0 < K_ℓ < 5.9, the allowable sector for the nonlinear characteristic becomes [0.0, 1.44] as shown in Table 3. Since the sectorial area of the discretized nonlinear characteristic is [0.5, 1.5], the stability of the nonlinear control system cannot be guaranteed. The response for (iii) actually fluctuates as shown in Figs. 15 and 16. This is a counterexample for Aizerman's conjecture.

9. Conclusion 

In this article, we have described robust stabilization and discretized PID control for 
continuous plants on a grid pattern with respect to controller variables and time elapsed. 
A robust stability condition for nonlinear discretized feedback systems was presented along 
with a method for designing PID control. The design procedure employs the modified Nichols 
diagram and its parameter specifications. The stability margins of the control system are 
specified directly in the diagram. Further, the numerical examples showed that the time responses can be stabilized with the required performance. The concept described in this article will be applicable to digital and discrete-event control systems in general.





        K_P   C_I   C_D   β₀      g_M[dB]   P_M[deg]   M_p
(i)     100   0     0     0.92    15.5      40.6       1.44
(ii)    100   2     0     0.71    14.7      27.7       2.09
(iii)   100   4     40    0.44    15.3      18.1       3.18

Table 3. PID parameters for Example-3.



Fig. 16. Phase traces for Example-3. 



10. References 

[1] R. E. Kalman, "Nonlinear Aspects of Sampled-Data Control Systems", Proc. of the 

Symposium on Nonlinear Circuit Analysis, vol. VI, pp.273-313, 1956. 
[2] R. E. Curry, Estimation and Control with Quantized Measurements, Cambridge, MIT Press, 

1970. 
[3] D. F. Delchamps, "Stabilizing a Linear System with Quantized State Feedback", IEEE Trans. on Automatic Control, vol. 35, pp. 916-924, 1990.
[4] M. Fu, "Robust Stabilization of Linear Uncertain Systems via Quantized Feedback", Proc. 

of IEEE Int. Conf. on Decision and Control, TuA06-5, 2003. 
[5] A. Datta, M.T. Ho and S.P. Bhattacharyya, Structure and Synthesis of PID Controllers, Springer-Verlag, 2000.
[6] F. Takemori and Y. Okuyama, "Discrete-Time Model Reference Feedback and PID 

Control for Interval Plants" Digital Control 2000:Past, Present and Future of PID Control, 

Pergamon Press, pp. 260-265, 2000. 
[7] Y. Okuyama, "Robust Stability Analysis for Discretized Nonlinear Control Systems in 

a Global Sense", Proc. of the 2006 American Control Conference, Minneapolis, USA, pp. 

2321-2326, 2006. 
[8] Y. Okuyama, "Robust Stabilization and PID Control for Nonlinear Discretized Systems on a Grid Pattern", Proc. of the 2008 American Control Conference, Seattle, USA, pp. 4746-4751, 2008.
[9] Y. Okuyama, "Discretized PID Control and Robust Stabilization for Continuous Plants", Proc. of the 17th IFAC World Congress, Seoul, Korea, pp. 1492-1498, 2008.
[10] Y. Okuyama et al., "Robust Stability Evaluation for Sampled-Data Control Systems with a Sector Nonlinearity in a Gain-Phase Plane", Int. J. of Robust and Nonlinear Control, Vol. 9, No. 1, pp. 15-32, 1999.
[11] Y. Okuyama et al., "Robust Stability Analysis for Non-Linear Sampled-Data Control Systems in a Frequency Domain", European Journal of Control, Vol. 8, No. 2, pp. 99-108, 2002.
[12] Y. Okuyama et al., "Amplitude Dependent Analysis and Stabilization for Nonlinear Sampled-Data Control Systems", Proc. of the 15th IFAC World Congress, T-Tu-M08, 2002.




[13] Y. Okuyama, "Robust Stabilization for Discretized PID Control Systems with Transmission Delay", Proc. of IEEE Int. Conf. on Decision and Control, Shanghai, P. R. China, pp. 5120-5126, 2009.
[14] L. T. Grujic, "On Absolute Stability and the Aizerman Conjecture", Automatica, pp. 

335-349. 1981. 
[15] Y. Okuyama et al., "Robust Stability Analysis for Nonlinear Sampled-Data Control Systems and the Aizerman Conjecture", Proc. of IEEE Int. Conf. on Decision and Control, Tampa, USA, pp. 849-852, 1998.



12 



Simple Robust Normalized PI 

Control for Controlled Objects with 

One-order Modelling Error 

Makoto Katoh 

Osaka Institute of Technology 
Japan 



1. Introduction 



In this section, the small gain theorem is introduced as background theory for this chapter. Then, a large mission on safety and a small mission on analytic solutions are introduced after indicating some problems in discussing robust PI control systems. Moreover, it is shown for a SISO system how it became possible to obtain the analytic solution of PI control adjustment for concrete robust control problems with uncertain modeling error, which is impossible using the state space theory for MIMO systems. The worst lines of the closed-loop gain margin are shown in a parameter plane. Finally, the risk, merit and demerit of robust control are discussed and a countermeasure for its safe use is introduced, and some themes, e.g., the lag time system, the MIMO system and a class of non-linear systems, are introduced for expansion of the approach of this chapter.

Many researchers have recently studied many kinds of robust systems. The basic robust stability concept is based on the small gain theorem (Zhou K. with Doyle J. C. and Glover K., 1996). The theorem states that a closed-loop system is internally (robustly) stable if and only if the H∞ norm of the nominal closed-loop transfer function is smaller than the inverse of the H∞ norm of any uncertainty of the feedback elements (Fig. 1). Moreover, the expansion of the theorem claims that a closed-loop system is stable if the product of the H∞ norms of the open-loop transfer functions is smaller than 1 when the forward and the feedback transfer functions are both stable.





when ‖W‖∞ ≤ γ, if ‖Δ‖∞ < 1/γ then the closed loop is internally stable

Fig. 1. Feedback system configuration with unknown feedback element Δ(s)

For MIMO state space models (A, B, C, D), a necessary and sufficient condition using an LMI (Linear Matrix Inequality) for the above norm bound of the controlled object is known as the following Bounded Real Lemma (Zhou K. and Khargonekar P.P., 1988), obtained using the Riccati inequality and the Schur complement.






∃P = Pᵀ > 0 such that

[ AᵀP + PA    PB     Cᵀ  ]
[ BᵀP        −γI     Dᵀ  ]  < 0   ⟺   ‖G(s)‖∞ < γ              (0)
[ C           D     −γI  ]



A gain margin between the critical closed-loop gain of a dependent-type IP controller obtained by the Hurwitz criteria and the analytical closed-loop gain solution when the closed-loop Hardy space norm becomes 1, together with the parametric stability margin (Bhattacharyya S. P., Chapellat H., and Keel L. H., 1994; Katoh 2010) on the uncertain time constant and damping coefficient, were selected in this chapter for their simplicity and robustness, although it was also expected that concrete internal-stability conditions for controlled objects and forward controllers may be obtained using this lemma.

One of the H∞ control problems is to obtain a robust controller K(s) such that the Hardy space norm of the closed-loop transfer function matrix is bounded as in Fig. 2, assuming various (additive, multiplicative, left co-prime factor etc.) uncertainties of the controlled object P(s) (Zhou K. with Doyle J. C. and Glover K., 1996).




z = Φw,   Φ = P₁₁ + P₁₂K(I − P₂₂K)⁻¹P₂₁,   ‖Φ‖∞ < γ

Fig. 2. Feedback system configuration for obtaining a robust controller K(s) when the Hardy space norm of the closed-loop transfer function matrix is bounded

The purpose of this chapter for the robust control problem is to obtain an analytical solution for the closed-loop gain of a dependent-type IP controller and to analyze robustness by the closed-loop gain margin for 2nd-order controlled objects with one-order feedback-like (left co-prime factor) uncertainty as in Fig. 1, in some tuning regions of the IP controller, when the Hardy space norm of the closed-loop transfer function matrix is bounded by less than 1. Although another basic robust problem is a cooperative design in the frequency region between the competitive sensitivity and co-sensitivity functions, it is omitted in this chapter because one tuning region of IP control is superior for unknown input disturbances while another tuning region is superior for unknown reference disturbances.
However, some approaches are not simple, using higher-order controllers with many stable zeros and using the norm with a window (Kohonen T., 1995, 1997) in Hardy space for evaluating the uncertainty of models. A number of robust PI or PID controller and compensator design methods have recently been proposed, but they do not consider the modelling error or parameter uncertainty.

Our given large mission is to construct safe robust systems using simple controllers and a simple evaluating method for the uncertainty of models. We have therefore proposed robust PI controllers for controlled objects without stable zeros (Katoh M., 2008, 2009). Our small mission in this chapter is to obtain an analytical solution of the controller gain with a flat



gain curve over a bandwidth, like a Butterworth filter, for the 3rd-order closed-loop systems with one-order modelling errors, and to show the robust property by the loop gain margin for damping coefficients of nominal controlled objects and time constants of missing objects (sensor and signal conditioner) using a table computation tool (Excel: Microsoft Co. Ltd.). It is interesting and important historically that an infinite time constant is contained in the investigated set though it does not actually exist. Moreover, we confirm the robustness for a parameter change by raising and lowering of the step response using a CAD tool (Simulink: MathWorks Co. Ltd.).

The risk of the integral term of a PI controller when the feedback line is disconnected can be mitigated by the M/A station used in many industrial applications, or by shutdown of the plant, from our standpoint. A simple soft M/A station for simulation with PI controllers is therefore shown in the appendix.

This method is not fully practical, because the computation becomes complicated for higher-order objects including plants with lag time as pointed out in the appendix, but it is useful.

2. System description 

In this section, a description of the higher-order generalized system for the later 2nd-order examples with one-order modeling error is presented, although it may not be computed concretely.

2.1 Normalized transfer function 

In this section, how and why to normalize transfer functions is explained.

The following transfer functions of the controlled objects, Eq. (1), with multiplicative one-order modeling error, Eq. (2), are normalized using a general natural angular frequency ω*_n and gain K* = K₀K_s as in Eq. (3), although the three positions distributed for normalization are different.



G(s) = K₀ · ∏_{i=1}^{r} [ ω²_{ni} / (s² + 2ζᵢω_{ni}s + ω²_{ni}) ] · ∏_{j=1}^{q} [ aⱼ / (s + aⱼ) ]      (1)

H(s) = K_s / (εs + 1)                                                                          (2)

G₁(s) = (1/K*)·G(s)·H(s),                                                                      (3)

whose denominator, when expanded, is a polynomial of order n + 1 with n = 2r + q.

Moreover, converting the differential operator s to s̄ as

s̄ := s / ω*_n,                                                 (4)

the following normalized open-loop transfer function is obtained:

G_ℓ(s̄) = 1 / ( ε̄s̄^{n+1} + β_n s̄^n + ⋯ + β₁s̄ + 1 ),   where n = 2r + q.   (5)


Neglecting the one-order modeling error, the following normalized open-loop transfer function is obtained:

Ḡ(s̄) = 1 / ( s̄^n + γ_{n−1}s̄^{n−1} + ⋯ + γ₁s̄ + 1 ),   where n = 2r + q.   (6)

2.2 State space models 

In this section, 3 kinds of descriptions of normalized state space models are shown, although they may not be computed concretely. The first is a continuous realization form of the higher-order transfer function for a SISO system. The second is a normalized sampled-system form of the first continuous realization at the sampling points. The third is a normalized continuously approximated form obtained using a logarithm conversion of the second sampled system.

Minimum realization of the normalized transfer function: The normalized transfer function, shown in Eq. (6), is converted to the following SISO controllable minimum realization:

ẋ(t) = A x(t) + b u(t)
y(t) = c x(t) + d u(t)                                         (7)

Normalized sampled system on sampling points: Integrating the response between two sampling points to the next sampling point, the following exact sampled system is obtained:

x((k+1)h) = e^{Ah} x(kh) − A⁻¹[ I − e^{Ah} ] b u(kh)
y(kh) = c x(kh) + d u(kh)                                      (8)

Normalized sampled system, approximated: Approximating Eq. (7) by the forward (advanced) difference method, the following sampled system is obtained:

x(k+1) = (I + Ah) x(k) + b h u(k)
y(k) = c x(k) + d u(k)                                         (9)

Normalized system in the continuous region: Returning to the continuous region after conversion using the matrix logarithm function, the following system is obtained in the continuous region:

ẋ(t) = A* x(t) + b* u(t)
y(t) = c x(t) + d u(t)                                         (10)

where

A* = (1/h)·ln(I + Ah) = A − (1/2)A²h + (1/3)A³h² − ⋯
b* = ( I − (1/2)Ah + (1/3)A²h² − ⋯ ) b                         (11)




The condition of convergence for the logarithm conversion Eq. (11) of the controllable companion description Eq. (7) is not discussed because it is assumed that the sampling time h is sufficiently small. The approximation order is then selected as the 9th order. Thus, d = 0 is assumed for simplification.
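The truncated logarithm conversion of Eq. (11) is easy to evaluate numerically. The following C sketch computes A* for a hypothetical 2×2 companion-form matrix and a small sampling time h, truncating the series at the 9th-order term as stated above; the matrix values are illustrative only.

    /* Truncated series A* = (1/h) ln(I + A h), Eq. (11), for a 2x2 example. */
    #include <stdio.h>

    #define N 2
    typedef double mat[N][N];

    void matmul(const mat a, const mat b, mat c)
    {
        mat t = {{0}};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++) t[i][j] += a[i][k]*b[k][j];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) c[i][j] = t[i][j];
    }

    int main(void)
    {
        const double h = 0.05;
        mat A = {{0.0, 1.0}, {-1.0, -1.4}};      /* hypothetical normalized plant */
        mat Ah = {{0}}, term, Astar = {{0}};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) { Ah[i][j] = A[i][j]*h; term[i][j] = Ah[i][j]; }
        double sign = 1.0;
        for (int m = 1; m <= 9; m++) {           /* (Ah)^m terms of ln(I + Ah)    */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    Astar[i][j] += sign*term[i][j]/((double)m*h);
            matmul(term, Ah, term);
            sign = -sign;
        }
        printf("A* (9th-order series):\n");
        for (int i = 0; i < N; i++) printf("%10.6f %10.6f\n", Astar[i][0], Astar[i][1]);
        return 0;
    }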

3. Controller and parameter tuning 

In this section, an IP controller and a number of parameter tuning methods are presented in 
order to increase the robustness of the control system. 

3.1 Normalized IP controller 

In this section, 3 kinds of descriptions of the normalized integral-lead dependent-type IP controller (which is not the conventional proportional-lead dependent-type PI controller) are shown. The first shows the inherent frequency for normalization as the magnitudes of the integral and proportional terms in continuous systems. The second shows the same for digital systems. The third shows again that of the digital system after returning to the approximated continuous system.

C(s̄) = K_i (1/s̄ + p) = K_i (ω_n/s + p)                        (12)

C*(z) = K_i p (z − 1 + h/p)/(z − 1) = K_i p (z − 1 + hω*_n/p)/(z − 1)      (13)

C*(s̄) = K_i*(1/s̄ + p) = K_i*(ω*_n/s + p)                      (14)

Note that the digital IP controller of Eq. (13) is asymptotic to proportional control as h approaches zero or p becomes larger. This controller is called IPL tuning. Then, the stable zero s = −1/p must not be placed in the neighborhood of the system poles, for safety.
3.2 Stability of closed loop transfer function 

In this section, higher-order systems are treated in order to consider, in general, three tuning regions classified by the amplitude of the P control parameter, using the Hurwitz approach, with the example of a second-order system with one-order modelling error. It is conjectured that there may generally be four elementary tuning regions and six combinatorial tuning regions from the viewpoint of Hurwitz stability.

The following normalized loop transfer function is obtained from the normalized controlled object Eq. (5) and the normalized controller Eq. (12):

W(s̄) = K_i (1 + ps̄)(ε̄s̄ + 1) / ( ε̄s̄^{n+2} + β_{2r+q} s̄^{n+1} + ⋯ + β₂s̄² + (K_i p + 1)s̄ + K_i ).   (15)

If the original parameters are positive, ∀i, j: ζᵢ > 0, aⱼ > 0, then ∀k: β_k > 0.
Assuming p ≥ 0 and K_i > 0, and that

φ(s̄) ≜ ε̄s̄^{n+2} + β_{2r+q} s̄^{n+1} + ⋯ + β₂s̄² + (K_i p + 1)s̄ + K_i      (16)




is a Hurwitz polynomial, the stability limits of K_i can be obtained as a region of p. This region is called the IPL region when p has a maximum lower bound and the IPO region when p = 0. The region between zero and the minimum upper bound is called the IPS region. The region between the minimum upper bound and the maximum lower bound is called the IPM region. Generally, there are four elementary regions and six combinatorial regions.

3.3 Stationary points investing approach on fraction equation 

In this section, the Stationary Points Investing approach on a Fraction Equation, for searching a local maximum with an equality restriction, is shown using Lagrange's undetermined multiplier approach. Multiple identical solutions of the independent variable are then obtained at the stationary points. They can be used to check for mistakes in calculation, as a self-diagnostic approach.

Here, the common normalized control parameters K_i and p will be obtained in the continuous region, which has reduced models derived from the original region.

Stationary Points Investing for Fraction Equation approach for searching a local maximum with an equality restriction:

|W(jω̄)|² = u(ω̄)/v(ω̄)                                          (17)
→ solve local maximum/minimum for ω̄ = ω̄_s such that |W(jω̄_s)| = 1.

This is the design policy of servo control for a wide bandwidth. In particular, |W(0)| = 1 means that the steady state error is 0.

Next, Lagrange's undetermined multiplier approach is applied to obtain the stationary points ω̄_s with the equality restriction, using the above u, v notations. The original problem can then be converted to the following problem:

J⁺(ω̄, λ) = |W(jω̄)|² + λ{u(ω̄) − v(ω̄)} = u(ω̄)/v(ω̄) + λ{u(ω̄) − v(ω̄)} → solve local maximum/minimum,   (18)

where λ is a Lagrange multiplier.
The necessary conditions for obtaining the local minimum/maximum of the new function become as follows:

∂J⁺(ω̄, λ)/∂ω̄ = [ u′(ω̄_s)v(ω̄_s) − u(ω̄_s)v′(ω̄_s) ] / v(ω̄_s)² + λ{ u′(ω̄_s) − v′(ω̄_s) } = 0,   (19)

∂J⁺(ω̄, λ)/∂λ = u(ω̄_s) − v(ω̄_s) = 0.                           (20)

The following relations are obtained from eqs. (19) and (20):






u′(ω̄_s) = v′(ω̄_s)   or   λ = −1/v(ω̄_s),                       (21)

u(ω̄_s) = v(ω̄_s).                                              (22)

Solutions of the control parameters:
Solving these simultaneous equations, the following functions can be obtained:

K_ij = g(ζ, p, ω̄_sj)   (j = 1, 2, …, σ),

where ω̄_s is the vector of stationary points.
Multiple solutions of K_i can be used to check for mistakes in calculation.
3.4 Example of a second-order system with one-order modelling error 

In this section, an IP control system in continuous design for a second-order original controlled object, without the one-order sensor and signal conditioner dynamics, is assumed for simplicity. The closed-loop system with uncertain one-order modeling error is normalized, and the stable region of the integral gain is obtained in the three tuning regions classified by the amplitude of the P control parameter using the Hurwitz approach. Then, the safety of the I-only tuning region and the risk of the large-P tuning region are discussed. Moreover, the analytic solutions of the stationary points and of the double identical integral gains are obtained using the Stationary Points Investing on Fraction Equation approach for the gain curve of the closed-loop system.

Here, an IP control system for a second-order controlled object without sensor dynamics is 
assumed. 
Closed-loop transfer function:

G(s) = K ω_n² / (s² + 2ζω_n s + ω_n²)   (23)

H(s) = K_s / (εs + 1)   (24)

G(s)H(s) = K_s K ω_n² / [(εs + 1)(s² + 2ζω_n s + ω_n²)]   (25)

s̄ ≜ s/ω_n,   ε̄ ≜ ω_n ε   (26)

W(s̄) = K_i(1 + p̄s̄)(ε̄s̄ + 1) / [ε̄s̄⁴ + (2ζ̄ε̄ + 1)s̄³ + (ε̄ + 2ζ̄)s̄² + (K_i p̄ + 1)s̄ + K_i]   (27)

Stable conditions by the Hurwitz approach with four parameters:

a. In the case of a certain time constant
IPL & IPS Common Region:


< K. < max[0, min[fc 2 , k 3 , oo]] (28) 

r A 2^ 2 + 2^ + l) 

^2 = — — (29) 

[p{4g 2 s + 2^ 2 + 2$- - s) - (2gs + 1) 2 ] (30) 



+ lp{4g l e+2ge l +2g-s}-{2gs+\yr 
k 3 A l + B^ 2 (, 2 + 2„ + l) (31) 

where p>0 for 4g 2 s + 2^ 2 + 2^ < ^ 
IPL, IPS Separate Region: 

The integral gain stability region is given by Eqs. (28) -(30). 

o< 2 (2 ^ + 2 1)2 *P (^) 

0<P< / , 2 _ (2 f_ + 2 1)2 , _ (PS) (32) 

4£- z £ + 2^r + 2^- - £ 

/or 4^- 2 ^ + 2^ 2 +2^--^>0 
It can be proven that k 3 >0 in the IPS region, and 

k 2 — » oo, k 3 — > oo w/zew /? — > (33) 



IPO Region:

0 < K_i < 2ζ̄(ε̄² + 2ζ̄ε̄ + 1) / (2ζ̄ε̄ + 1)²,   p̄ = 0   (34)

The IPO region is the safest because it has no zeros.

b. In the case of an uncertain positive time constant
IPL & IPS Common Region:

0 < K_i < max[0, min[k₂, k₃]]  when  p̄ > 0   (35)

where k₃(p̄, ζ̄, ε̄) is as given by Eq. (31), for 4ζ̄²ε̄ + 2ζ̄ε̄² + 2ζ̄ < ε̄   (36)

min over ε̄ of k₂ = 4ζ̄(ζ̄ + 1)/p̄,  attained at  ε̄ = 1   (37)

IPL, IPS Separate Region: 

This region is given by Eq. (32). 






IPO Region:

0 < K_i < 2ζ̄(1 − ζ̄²)  when  ε̄ = ζ̄/(1 − 2ζ̄²) > 0,  0 < ζ̄ < 0.707,  p̄ = 0
0 < K_i < 1/(2ζ̄)  when  ε̄ → +∞,  ζ̄ > 0.707,  p̄ = 0   (38)

c. Robust loop gain margin

The following loop gain margin is obtained from eqs. (28) through (38) in the cases of certain and uncertain parameters:

gm ≜ K_i,UL / K_i   (39)

where K_i,UL is the upper limit of the stable loop gain K_i.
Stable conditions by the Hurwitz approach with three parameters:

The stability conditions are shown below in order to assess the risk of the one-order modelling error.

0 < K_i  where  p̄ ≥ 1/(2ζ̄)   (PL)   (40)

0 < K_i < 2ζ̄/(1 − 2ζ̄p̄)  where  0 ≤ p̄ < 1/(2ζ̄)   (P0)   (41)

Hurwitz stability of the discretized system is omitted because h is sufficiently small, although it can be checked using the bilinear transform.
Robust loop gain margin:

gm = ∞   (PL region)   (42)

It is risky to increase the loop gain too much in the IPL region, even if the system does not become unstable, because a model-order error may cause instability in the IPL region. In the IPL region the sensitivity to disturbances at the output rises and the flatness of the gain curve is sacrificed, even though the disturbance entering at the input can be isolated from the output by increasing the control gain.
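As a numerical cross-check of the stability limits above, the following sketch (assuming the characteristic polynomial of Eq. (27) and the bounds k₂, k₃ as reconstructed in Eqs. (29)-(31); the sample parameter values are illustrative) tests whether the closed-loop roots are in the open left half plane just below and just above the predicted upper limit of K_i.

import numpy as np

def is_hurwitz(zeta, eps, pbar, Ki):
    # Characteristic polynomial of the normalized closed loop, Eq. (27):
    # eps*s^4 + (2*zeta*eps+1)*s^3 + (eps+2*zeta)*s^2 + (1+Ki*pbar)*s + Ki
    coeffs = [eps, 2*zeta*eps + 1, eps + 2*zeta, 1 + Ki*pbar, Ki]
    return np.all(np.real(np.roots(coeffs)) < 0)

def Ki_upper_limit(zeta, eps, pbar):
    # k2 from Eq. (29) and k3 from Eqs. (30)-(31), as reconstructed
    k2 = 2*zeta*(eps**2 + 2*zeta*eps + 1) / (eps*pbar)
    b = pbar*(4*zeta**2*eps + 2*zeta*eps**2 + 2*zeta - eps) - (2*zeta*eps + 1)**2
    k3 = (b + np.sqrt(b**2 + 8*zeta*eps*pbar**2*(eps**2 + 2*zeta*eps + 1))) / (2*eps*pbar**2)
    return min(k2, k3)

zeta, eps, pbar = 1.0, 0.5, 1.5            # illustrative parameter values
KiUL = Ki_upper_limit(zeta, eps, pbar)
print(KiUL, is_hurwitz(zeta, eps, pbar, 0.99*KiUL), is_hurwitz(zeta, eps, pbar, 1.01*KiUL))
# Expected: stable just below the limit, unstable just above it.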
Frequency transfer function:

|W(jω̄)|² = K_i²(1 + ω̄²p̄²) / [(K_i − 2ζ̄ω̄²)² + ω̄²(1 − ω̄² + K_i p̄)²]
→ solve local maximum/minimum for ω̄ = ω̄_s
such that |W(jω̄_s)| = 1   (43)

If the evaluation function is treated as a function of the two variables ω̄ and K_i and a stationary point is sought in both, the resulting parameters do not satisfy the above stability conditions.



270 



Advances in Reinforcement Learning 



Therefore, only the stationary points in the direction of ω̄ will be obtained, without treating the evaluation function as a function of K_i alone.

Stationary points and the integral gain:

Using the Stationary Points Investing for Fraction Equation approach, based on Lagrange's undetermined multiplier approach with an equality restriction, the following two loop gain equations in x are obtained. Both identities can be used to check for miscalculation.

K_i1 = 0.5{x² + 2(2ζ̄² − 1)x + 1} / {2ζ̄ + (x − 1)p̄}   (44)

K_i2 = 0.5{3x² + 4(2ζ̄² − 1)x + 1} / {2ζ̄ + (2x − 1)p̄}   (45)

where x = ω̄² ≥ 0.

Equating the right-hand sides of these equations, a third-order algebraic equation is obtained, whose semi-positive solutions give the stationary points:

x_s = 0  and the non-negative roots of  p̄x² + 2(2ζ̄ − p̄)x + 2(2ζ̄² − 1)(2ζ̄ − p̄) − p̄ = 0   (46)

These points are called the first and second stationary points; the corresponding tuning methods, which set the gain to 1 at these points, are called the first and second tuning methods, respectively.
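The following short numerical sketch (using the forms of Eqs. (44)-(46) as reconstructed above; the parameter values are illustrative) finds the semi-positive stationary points x_s and the corresponding double integral gain K_i1 = K_i2 for a given pair (ζ̄, p̄), which is exactly the kind of self-diagnostic comparison used to build the tables of Section 4.

import numpy as np

def double_gain(zeta, pbar):
    a = 2*(2*zeta**2 - 1)
    c = 2*zeta - pbar
    # Quadratic factor of the third-order equation obtained from K_i1 = K_i2, Eq. (46)
    roots = np.roots([pbar, 2*c, a*c - pbar])
    xs = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    out = []
    for x in xs:
        K1 = 0.5*(x**2 + a*x + 1) / (2*zeta + (x - 1)*pbar)        # Eq. (44)
        K2 = 0.5*(3*x**2 + 2*a*x + 1) / (2*zeta + (2*x - 1)*pbar)  # Eq. (45)
        out.append((np.sqrt(x), K1, K2))   # omega_s and the two (ideally equal) gains
    return out

for zeta in (0.8, 1.0, 1.2):
    print(zeta, double_gain(zeta, 1.5))    # illustrative proportional parameter p = 1.5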



4. Numerical results 

In this section, the solutions for the double (equal) integral gain for a tuning region at the stationary point of the gain curve of the closed-loop system are shown and checked in parameter tables over the normalized proportional gains and normalized damping coefficients. Moreover, loop gain margins are shown in parameter tables over the uncertain time constant of the one-order modeling error and the damping coefficient of the original controlled object, for several tuning regions including the safest I-only region.





        0.9      0.95     1.00     1.05     1.10
0.8     0.8612   1.0496   1.1892   1.3068   1.4116
0.9     0.6999   0.9424   1.0963   1.2197   1.3271
1.0     -99      0.8186   1.0000   1.1335   1.2457
1.1     -99      0.6430   0.8932   1.0446   1.1647
1.2     -99      -99      0.7598   0.3480   1.0812

Table 1. ω̄_s values for ζ̄ and p̄ in IPL tuning by the first tuning method





        0.9      0.95     1.00     1.05     1.10
0.8     0.7750   1.0063   1.2500   1.5063   1.7750
0.9     0.6889   0.8944   1.1111   1.3389   1.5778
1.0     1.2272   0.8050   1.0000   1.2050   1.4200
1.1     1.1077   0.7318   0.9091   1.0955   1.2909
1.2     1.0149   1.0791   0.8333   1.0042   1.1833

Table 2. K_i1 = K_i2 values for ζ̄ and p̄ in IPL tuning by the first tuning method



Simple Robust Normalized PI Control for Controlled Objects with One-order Modelling Error 



271 



Table 1 lists the stationary points for the first tuning method. Table 2 lists the integral gains (K_i1 = K_i2) obtained by substituting Eq. (46) into Eqs. (44) and (45) for various damping coefficients.

Table 3 lists the integral gains (K_i1 = K_i2) for the second tuning method.





        0.9      0.95     1.00     1.05     1.10
1.3     1.0      0.8333   0.7143   0.6250   0.5556
1.4     1.250    1.0      0.8333   0.7143   0.6250
1.5     1.667    1.250    1.0      0.8333   0.7143
1.6     2.50     1.667    1.250    1.0      0.8333
1.7     5.00     2.50     1.667    1.250    1.0

Table 3. K_i1 = K_i2 values for ζ̄ and p̄ in IPL tuning by the second tuning method

Then, a table of loop gain margins (gm > 1), generated by Eq. (39) using the stability limit and the loop gain given by the second tuning method, over uncertain ε̄ in a given region for each controlled ζ̄ under IPL (p̄ = 1.5) control, is very useful for the analysis of robustness. It distinguishes the unstable region, the region which does not become unstable even if the loop gain becomes larger, and the robust stable region in which uncertainty of the time constant is permitted over the considered range of ε̄.

Figure 3 shows a reference step up-down response with an unknown input disturbance in the continuous region. The gain for the disturbance step under IPL tuning is controlled to approximately 0.38 and the settling time is approximately 6 s.

The robustness of the step response to a damping-coefficient change of ±0.1 is an advantageous property. Considering the zero-order hold, with an imperfect dead-time compensator using a 1st-order Padé approximation, the overshoot in the reference step response is larger than that in the original region or in the continuous region.



(ζ̄ = 1 ± 0.1, K_i = 1.0, p̄ = 1.5, ω_n = 1.005, g = 1, ε = 199.3, k = -0.0050674)

Fig. 3. Robustness of IPL tuning for damping coefficient change (second tuning for the nominal 2nd-order system; responses shown for ζ = 0.9, ζ = 1.0 nominal, ζ = 1.1).

Then, Table 4 lists robust loop gain margins (gm > 1), using the stability limit of Eq. (37) and the loop gain given by the second tuning method, over uncertain ε̄ in the region 0.1 ≤ ε̄ ≤ 10 for each controlled ζ̄ (> 0.7) under IPL (p̄ = 1.5) control. The first rows (negative values) mark the area that is unstable.






Table 5 does the same for each controlled g (>0.4) by IPS(p =0.01). Table 6 does the same for 
each controlled g (>0.4) by IP0( p =0.0). 



ε̄ \ ζ̄   0.3      0.7      0.8      0.9      1        1.1      1.2
0.1      -2.042   -1.115   1.404    5.124    10.13    16.49    24.28
0.2      -1.412   -0.631   0.788    2.875    5.7      9.33     13.83
1.5      -0.845   -0.28    0.32     1.08     2        3.08     4.32
2.4      -1.019   -0.3     0.326    1.048    1.846    2.702    3.6
3.2      -1.488   -0.325   0.342    1.06     1.8      2.539    3.26
5        -2.128   -0.386   0.383    1.115    1.778    2.357    2.853
10       -4.596   -0.542   0.483    1.26     1.81     2.187    2.448

Table 4. Robust loop gain margins on uncertain ε̄ in each region for each controlled ζ̄ at IPL (p̄ = 1.5)



ε̄ \ ζ̄   0.4      0.5      0.6      0.7      0.8
0.1      1.189    1.832    2.599    3.484    4.483
0.6      1.066    1.524    2.021    2.548    3.098
1        1.097    1.492    1.899    2.312    2.729
2.1      1.254    1.556    1.839    2.106    2.362
10       1.717    1.832    1.924    2.003    2.073

Table 5. Robust loop gain margins on uncertain ε̄ in each region for each controlled ζ̄ at IPS (p̄ = 0.01)



ε̄ \ ζ̄   0.3      0.4      0.5      0.6      0.7      0.8      0.9      1
0.1      0.6857   1.196    1.835    2.594    3.469    4.452    5.538    6.722
0.4      0.6556   1.087    1.592    2.156    2.771    3.427    4.118    4.84
0.5      0.6604   1.078    1.556    2.081    2.645    3.24     3.859    4.5
0.6      0.6696   1.075    1.531    2.025    2.547    3.092    3.655    4.231
1        0.7313   1.106    1.5      1.904    2.314    2.727    3.141    3.556
2.1      0.9402   1.264    1.563    1.843    2.109    2.362    2.606    2.843
10       1.5722   1.722    1.835    1.926    2.004    2.073    2.136    2.195
9999     1.9995   2        2        2        2        2        2        2

Table 6. Robust loop gain margins on uncertain ε̄ in each region for each controlled ζ̄ at IPO (p̄ = 0.0)
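As an illustration of how such margin tables can be regenerated, the sketch below evaluates the loop gain margin gm = K_i,UL / K_i of Eq. (39) on a grid of (ε̄, ζ̄) for the IPO bound of Eq. (34) (as reconstructed earlier). The nominal integral gain K_i produced by the chosen tuning method is left as an input, since its closed form depends on the tuning region; the one used below is a purely hypothetical placeholder to exercise the code.

import numpy as np

def Ki_upper_limit_IPO(zeta, eps):
    # Upper stability limit of the integral gain for p = 0, Eq. (34)
    return 2*zeta*(eps**2 + 2*zeta*eps + 1) / (2*zeta*eps + 1)**2

def gain_margin_table(zetas, epss, Ki_nominal):
    # gm = K_i,UL / K_i  (Eq. (39)); Ki_nominal(zeta) comes from the tuning method used
    return np.array([[Ki_upper_limit_IPO(z, e) / Ki_nominal(z) for z in zetas] for e in epss])

zetas = np.array([0.4, 0.6, 0.8, 1.0])
epss  = np.array([0.1, 1.0, 10.0, 1e4])
table = gain_margin_table(zetas, epss, Ki_nominal=lambda z: 0.5/z)   # hypothetical nominal gain
print(np.round(table, 3))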

These table data, with additional points, were converted to the 3D mesh plots shown in Fig. 4. Since IPO and IPS with very small p̄ are almost equivalent, even though their equations differ considerably, the number of figures is reduced; this also implies the validity of both sets of equations.

According to the line of the worst loop gain margin as a function of the damping (attenuation) of the controlled objects, indicated by the gray labels, this parametric stability margin (PSM) (Bhattacharyya S. P., Chapellat H., and Keel L. H., 1994) is classified into 3 regions in the IPS and IPO tuning regions and into 4 regions in the IPL tuning regions, as shown in Fig. 5. We may call the larger-attenuation region, with a loop gain margin of more than 2, the strong robust segment region, in which an uncertain time constant of the one-order modeling error is allowed anywhere and some change of attenuation is also allowed.




(c) p =0.5 



(d)^=0.01or0 



(a) p=1.5 (b)^=1.0 

Fig. 4. Mesh plot of closed loop gain margin 

Next, we call the larger-attenuation region with a loop gain margin of more than γ > 1 and less than 2 the weak robust segment region, in which an uncertain time constant of the one-order modeling error is only allowed in some region above some larger loop gain margin, and larger changes of attenuation are not allowed. The third and fourth segments are almost unstable. In particular, notice that the joint between segments bends sharply, so that the sensitivity of the loop gain margin to uncertainty is larger than one might imagine.




(a) p =1.5 (b) p =0.01 (c) p =0 (d) p =1.5, 1.0, 0.5, 0.01 

Fig. 5. The various worst lines of loop gain margin in a parameter plane (certain&uncertain) 

Moreover, the reader should notice that the strong and weak robust regions of IPL are shifted to a larger damping-coefficient region than those of IPS and IPO. This is another risk of the IPL tuning region and of changing the tuning region from IPO or IPS to IPL.



5. Conclusion 

In this chapter, the way to convert the IP control tuning parameters to an independent-type PI controller was presented, together with the parameter tuning policy and the reason for adopting it. The good and not-so-good results, limitations and meanings of this chapter are summarized below. The closed-loop gain curve obtained from the second-order example with a one-order feedback modeling error implies that a Butterworth-filter model-matching method may be useful for higher-order systems. The Hardy space norm with a bounded window was defined, and robust stability was discussed for MIMO systems by an expansion of the small gain theorem under a boundedness condition on the closed-loop system.




We first obtained an integral-gain-leading type of normalized IP controller, to facilitate the adjustment of the tuning parameters explained later. The controller is similar to conventional analog controllers, which are proportional-gain-type PI controllers, and it can easily be converted to the independent-type PI controller used in recent computer controls by adding some conversion gains. The parameter tuning policy is to make the norm of the closed-loop frequency transfer function, containing a one-order modeling error with an uncertain time constant, less than 1. This policy was selected because it is similar to the conventional expansion of the small gain theorem and because it is achievable with PI control; the controller and the uncertainty model then become very simple. Moreover, a simple approach for obtaining the solution was proposed: optimization with an equality restriction, using Lagrange's undetermined multiplier approach applied to the closed-loop frequency transfer function.

The stability of the closed-loop transfer function was investigated using the Hurwitz criteria, since the structure of the coefficients was known even though they contained an uncertain time constant.

The loop gain margin, defined as the ratio of the upper stability limit of the integral gain to the nominal integral gain, was investigated in the parameter plane of the damping coefficient and the uncertain time constant. The robust controller is safe, in a sense, if the robust stable region measured by the loop gain margin is singly connected and changes continuously in the parameter plane, even when the uncertain time constant changes over a wide region of damping coefficients and even when an uncertain adjustment is made. The IPO tuning region is then the safest and the IPL region the riskiest.

Moreover, it is a historically and newly good result that the worst loop gain margin for each damping coefficient approaches 2 over a large region of damping coefficients. The worst loop-gain-margin line in the plane of the uncertain time constant and the controlled-object parameters had 3 or 4 segments; they were classified into a strong robust segment region, with a closed-loop gain margin greater than 2, and a weak robust segment region, with a gain margin greater than γ > 1 and less than 2. The author also pointed out the risk of the IPL tuning region and of changing the tuning region.

A less satisfactory result is that the analytical solution and the stable region are complicated to obtain for higher-order systems with higher-order modeling errors, even though they were easy and elementary here; in that sense the approach is impractical for such systems.

6. Appendix 

A. Example of a second-order system with lag time and one-order modelling error 

In this section, in order to apply the robust PI control concept of this chapter to systems with lag time, systems with a one-order model error are approximated using the Padé approximation, and only the simple stability region of the integral gain is shown, for a special proportional tuning case and for simplicity, because obtaining the solution of the integral gain is difficult.

Here, a digital IP control system for a second-order controlled object with lag time L, without sensor dynamics, is assumed. For simplicity, only the special proportional-gain case is shown.

Transfer functions:

G_L(s) = K e^(−Ls)/(Ts + 1) ≈ K(1 − 0.5Ls)/[(Ts + 1)(0.5Ls + 1)] = K ω_n²(1 − 0.5Ls)/(s² + 2ζω_n s + ω_n²)   (A.1)

ω_n = 1/√(0.5TL),   ζ = 0.5(T + 0.5L)/√(0.5TL)   (A.2)

H(s) = K_s/(εs + 1)   (A.3)
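A small numerical check of the Padé-based second-order approximation (A.1)-(A.2), in the form reconstructed above, is sketched below: it computes ω_n and ζ from T and L (illustrative values) and verifies that (Ts+1)(0.5Ls+1) and (s² + 2ζω_n s + ω_n²)/ω_n² have the same coefficients.

import numpy as np

def second_order_params(T, L):
    # Eq. (A.2): omega_n and zeta of the Pade-approximated plant
    wn = 1.0/np.sqrt(0.5*T*L)
    zeta = 0.5*(T + 0.5*L)/np.sqrt(0.5*T*L)
    return wn, zeta

T, L = 2.0, 0.4                                     # illustrative time constant and lag
wn, zeta = second_order_params(T, L)
lhs = np.polymul([T, 1.0], [0.5*L, 1.0])            # (Ts+1)(0.5Ls+1)
rhs = np.array([1.0, 2*zeta*wn, wn**2]) / wn**2     # (s^2 + 2 zeta wn s + wn^2)/wn^2
print(np.allclose(lhs, rhs))                        # expected: True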

Normalized operation:

The normalizing operations, the same as those mentioned above, are performed as follows:

s̄ ≜ s/ω_n,   L̄ ≜ L ω_n,   ε̄ ≜ ε ω_n

C(s) = K_i(1/s + p),   C(s̄) = K̄_i(1/s̄ + p̄)

Closed-loop transfer function:

The closed-loop transfer function is obtained using the above normalization. Its general form is

W(s̄) = K_i(1 + p̄s̄)(1 − 0.5L̄s̄)(ε̄s̄ + 1) / [ε̄s̄⁴ + (2ζ̄ε̄ + 1)s̄³ + (ε̄ + 2ζ̄ − 0.5p̄L̄K_i)s̄² + (1 + K_i p̄ − 0.5K_i L̄)s̄ + K_i]   (A.10)

if p̄ = 0.5L̄ then

W(s̄) = K_i(1 + 0.5L̄s̄)(1 − 0.5L̄s̄)(ε̄s̄ + 1) / [ε̄s̄⁴ + (2ζ̄ε̄ + 1)s̄³ + (ε̄ + 2ζ̄ − 0.5²L̄²K_i)s̄² + s̄ + K_i]   (A.11)

if p̄ = 0 then

W(s̄) = K_i(1 − 0.5L̄s̄)(ε̄s̄ + 1) / [ε̄s̄⁴ + (2ζ̄ε̄ + 1)s̄³ + (ε̄ + 2ζ̄)s̄² + (1 − 0.5L̄K_i)s̄ + K_i]   (A.12)




Stability analysis by the Hurwitz approach:

1. For p̄ < 0.5L̄:  0 < K_i < min{(ε̄ + 2ζ̄)/(0.5p̄L̄), 1/(0.5L̄ − p̄)},  ζ̄ > 0, ε̄ > 0

{(2ζ̄ε̄ + 1)(2ζ̄ + ε̄ − 0.5²L̄²K_i) − ε̄} > K_i(2ζ̄ε̄ + 1)²  when  p̄ = 0.5L̄   (A.13)

2ζ̄(ε̄² + 2ζ̄ε̄ + 1) / [(2ζ̄ε̄ + 1){(2ζ̄ε̄ + 1) + 0.5²L̄²}] > K_i  when  p̄ = 0.5L̄   (A.14)

0 < K_i < min{ (ε̄ + 2ζ̄)/(0.5²L̄²),  2ζ̄(ε̄² + 2ζ̄ε̄ + 1)/[(2ζ̄ε̄ + 1){(2ζ̄ε̄ + 1) + 0.5²L̄²}] }  when  p̄ = 0.5L̄   (A.15)

In the continuous region with one-order modelling error,

0 < K_i < 2ζ̄/(1 + 0.5²L̄²)  when  p̄ = 0.5L̄   (A.16)

The analytical solution of K_i for a flat gain curve using the Stationary Points Investing for Fraction Equation approach is complicated to obtain; it is therefore left as a theme for the reader. In the future, another approach will be developed for safe and simple robust control.

B. Simple soft M/A station

In this section, a configuration of a simple soft M/A station and of the feedback control system using it is shown, as a simple safety interlock avoiding dangerously large overshoot.

B.1 Function and configuration of the simple soft M/A station

This appendix describes a simple interlock plan for a simple soft M/A station that has a parameter-identification mode (manual mode) and a control mode (automatic mode).

The simple soft M/A station is switched from automatic operation mode to manual operation mode, for safety, when it is used to switch between the identification mode and the control mode, and when the value of Pv exceeds the prescribed range. This serves to protect the plant: in the former case it acts when the integrator of the PID controller varies erratically and the control system malfunctions, and in the latter case it acts when switching from P control with a large steady-state deviation under high load to PI or PID control, so that the liquid in the tank would otherwise spill over. Other dangerous situations are not considered here because they do not fall under general basic control.

There have been several attempts to arrange and classify the control logic by using a case base. The M/A interlock should therefore be enhanced to improve safety and maintainability; this has not yet been achieved for a simple M/A interlock plan (Fig. B1).

For safety reasons, automatic operation mode must not be resumed after the station has changed into manual operation mode because of one process value, even if that process value recovers to a level appropriate for automatic operation.

Semiautomatic parameter identification and PID control are driven by case-based data memories of tuners, which have a nest structure for identification. This case-based data memory method can be used for reusing information and for preserving integrity and maintainability in semiautomatic identification and control. The semiautomatic approach is adopted not only to make operation easier but also to enhance safety relative to the fully automatic approach.
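The self-holding interlock behaviour described above can be sketched as follows (a minimal illustration, not the actual implementation; the class name, thresholds and signal names are assumptions): once the process value Pv leaves the prescribed range, the station latches into manual mode and does not return to automatic even after Pv recovers.

class SoftMAStation:
    """Minimal sketch of the simple soft M/A station interlock."""

    def __init__(self, pv_low, pv_high):
        self.pv_low, self.pv_high = pv_low, pv_high
        self.mode = "auto"            # "auto" or "manual"
        self.latched = False          # self-holding logic

    def update(self, pv, mv_auto, mv_manual):
        # Switch to manual (and latch) when Pv exceeds the prescribed range.
        if not (self.pv_low <= pv <= self.pv_high):
            self.latched = True
        if self.latched:
            self.mode = "manual"      # no automatic return, even if Pv recovers
        return mv_auto if self.mode == "auto" else mv_manual

station = SoftMAStation(pv_low=0.0, pv_high=1.2)
for pv in (0.5, 1.3, 0.8):            # Pv recovers after the excursion
    print(station.mode, station.update(pv, mv_auto=0.7, mv_manual=0.0))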






Notation in computer control (Figs. B1, B3):
Pv: Process value
Av: Actual value
Cv: Control value
Mv: Manipulated value
Sp: Set point
A: Auto
M: Manual
T: Test

Fig. B1. Configuration of the simple soft M/A station: conditions on Pv and conditions on the M/A switch feed an integrated switching logic with self-holding logic, which selects Mv between the manual-mode and auto-mode values.

B.2 Example of a SISO system 

Fig. B2 shows how the M/A station is used in the configuration of a SISO control system.

Fig. B2. Configuration of an IP control system with an M/A station for a SISO controlled object: the controller output c(t) passes through the M/A station to produce u(t) for the controlled object, whose output y(t) is fed back through the sensor and signal conditioner (SC).
where the transfer functions needed in Fig. B2 are as follows:
Controlled object: G(s) = K/(Ts + 1)
Sensor & signal conditioner: G_s(s) = K_c/(T_c s + 1)
Controller: C(s) = 0.5K_i2(1/s + 0.5L)
Sensor calibration gain: 1/K_c
Normalized gain before the M/A station: 1/√(0.5TL)
Normalized gain after the M/A station: 1/K

Fig. B3 shows examples of simulated results for two kinds of switching mode when Pv becomes higher than a given threshold: (a) shows switching to out-of-service and (b) switching to manual mode. In the former, Mv goes down and Cv is almost held; in the latter, Mv is held and Cv goes down.






Fig. B3. Simulation results for two kinds of switching mode (traces of Mv, Pv, Av and Cv): (a) switching from auto mode to out-of-service on Pv high; (b) switching from auto mode to manual mode on Pv high.

C. New norm and expansion of the small gain theorem

In this section, a new range-restricted norm of Hardy space with a window (Kohonen T., 1995), H∞^w, is defined, the window being indicated in the norm notation by the superscript w, and a new expansion of the small gain theorem, based on a closed-loop system as in general H∞ control problems and robust sensitivity analysis, is shown for applying the robust PI control concept of this chapter to MIMO systems.

Robust control has aimed at soft servo behaviour and requires internal stability of the closed-loop control system. It was therefore difficult to apply to process control systems or hard servo systems, which need strong robust stability without steady-state deviation from the reference value, as provided by integral terms.

The method that sets the maximum value of the closed-loop gain curve to 1, and the results of the numerical experiments reported in the sections above, suggest the following new expansion of the small gain theorem, which gives the upper limit of the Hardy space norm of the forward element in terms of the upper limit of all uncertain feedback elements, for robust stability.

In order to allow functions that are unbounded over the whole real frequency domain, such as the integral term in the forward element, the domain of the Hardy norm of the function concerned is limited explicitly to a section of the positive real frequency axis on which the function is bounded.

Proposition

Assume that the feedback transfer function H(s) (with uncertainty) is stable and that the following inequality holds:

‖H(s)‖∞^w ≤ 1/γ,  γ > 1   (C-1)

Moreover, if the negative feedback closed-loop system shown in Fig. C-1 is stable and the following inequality holds,

‖W(s)‖∞^w = ‖G(s)/(1 + G(s)H(s))‖∞^w ≤ 1   (C-2)

then the following inequality on the open-loop transfer function holds in a region of frequency:

‖G(jω)H(jω)‖∞^w ≤ 1/(γ − 1),  γ > 1,  for ω ∈ [ω_min, ω_max]   (C-3)

In the same feedback system, G(s) satisfies the following inequality in a region of frequency:

‖G(jω)‖∞^w ≤ γ/(γ − 1),  γ > 1,  for ω ∈ [ω_min, ω_max]   (C-4)



Fig. C-1. Configuration of a negative feedback system (forward element G(s), feedback element H(s)).

(Proof)

Using the triangle inequality on the separation of norms of a sum and an inequality on the separation of norms of a product, like Hölder's inequality, over the frequency region [ω_min, ω_max] taken as the domain of the norm of Hardy space with window, the following inequalities on the frequency transfer function G(jω) are obtained from the assumptions of the proposition:

‖W(jω)‖∞^w = ‖G(jω)/(1 + G(jω)H(jω))‖∞^w ≤ 1   (C-5)

‖G(jω)‖∞^w ≤ ‖1 + G(jω)H(jω)‖∞^w ≤ 1 + ‖G(jω)‖∞^w ‖H(jω)‖∞^w   (C-6)

If ‖H(jω)‖∞^w ≤ 1/γ < 1, γ > 1, then

‖G(jω)‖∞^w ≤ 1/(1 − ‖H(jω)‖∞^w) ≤ γ/(γ − 1)   (C-7)

Moreover, the following inequality on the open-loop frequency transfer function is shown:

1/(γ − 1) ≥ ‖G(jω)‖∞^w ‖H(jω)‖∞^w ≥ ‖G(jω)H(jω)‖∞^w   (C-8)

Concerning these norm inequalities, the converse proposition might also be shown, although the separation of the product of norms in the Hardy space with window is not clear, and the sufficient conditions for closed-loop stability are not clear either. These remain themes for the reader in the future.
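A gridding-based numerical illustration of the windowed norm and of the inequality chain (C-5)-(C-8) is sketched below. It is only an approximate check on a finite frequency grid, with an arbitrary stable example pair G, H chosen for illustration (the forward element contains an integral term, which is the case motivating the window); it is not a proof of the proposition.

import numpy as np

def windowed_norm(tf, w_min, w_max, n=4000):
    # sup of |tf(j w)| over the window [w_min, w_max], approximated on a grid
    w = np.linspace(w_min, w_max, n)
    return np.max(np.abs(tf(1j*w)))

G = lambda s: 0.3/s                        # forward element with integral term
H = lambda s: 0.4/(0.1*s + 1.0)            # stable feedback element
W = lambda s: G(s)/(1.0 + G(s)*H(s))

w_min, w_max = 1.0, 100.0                  # window excluding the pole at s = 0
gamma = 1.0/windowed_norm(H, w_min, w_max)
print("gamma =", gamma)
print("(C-2):", windowed_norm(W, w_min, w_max) <= 1.0)
print("(C-4):", windowed_norm(G, w_min, w_max) <= gamma/(gamma - 1.0))
print("(C-3):", windowed_norm(lambda s: G(s)*H(s), w_min, w_max) <= 1.0/(gamma - 1.0))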






D. Parametric robust topics

In this section, the following three topics (Bhattacharyya S. P., Chapellat H., and Keel L. H., 1994) are introduced as assumptions for parametric robust properties (static, dynamic and stability-related), after linearizing a class of non-linear systems to a quasi linear parametric variable (QLPV) model by Taylor expansion using the first-order remainder term (M. Katoh, 2010).

1. Continuity with respect to change of a parameter
Boundary Crossing Theorem

Consider 1) fixed-order polynomials P(λ, s) that are 2) continuous with respect to one parameter λ on a fixed interval I = [a, b]. If P(a, s) has all its roots in S and P(b, s) has at least one root in U, then there exists at least one ρ in (a, b] such that:
a) P(ρ, s) has all its roots in S ∪ ∂S
b) P(ρ, s) has at least one root in ∂S



Fig. D-1. Image of the boundary crossing theorem: the roots of P(a, s) lie in S, and for P(ρ, s) at least one root reaches the boundary ∂S.

2. Convexity with respect to change of a parameter
Segment Stability Lemma

Define a segment using two stable polynomials as follows:

S_λ(s) ≜ λS₁(s) + (1 − λ)S₂(s)
[S₁(s), S₂(s)] ≜ {S_λ(s) : λ ∈ [0, 1]}   (D-1)

where S₁(s), S₂(s) are polynomials of degree n, stable with respect to S.

Then the following are equivalent (a numerical sketch of this segment stability check is given after this list):
a) The segment [S₁(s), S₂(s)] is stable with respect to S
b) S_λ(s*) ≠ 0 for all s* ∈ ∂S and all λ ∈ [0, 1]

3. Worst stability margin with respect to change of a parameter

The parametric stability margin (PSM) is defined as the worst-case stability margin within the parameter variation. It can be applied to a QLPV model of a class of non-linear systems. There are non-linear systems whose stability margin becomes worse than that of the linearized system, although there are also ones with a better stability margin. There are cases characterized by the single parameter m, which describes the injection rate of I/O, the interpolation rate of the segment, or the degree of non-linearity.
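As referenced in item 2 above, a direct numerical check of the segment stability condition (D-1) can be sketched as follows, by gridding λ and testing Hurwitz stability of λS₁(s) + (1−λ)S₂(s); the two example polynomials are arbitrary stable choices used only for illustration.

import numpy as np

def hurwitz(coeffs):
    # True if all roots of the polynomial (highest power first) are in the open LHP
    return np.all(np.real(np.roots(coeffs)) < 0)

def segment_stable(S1, S2, n_lambda=201):
    # Check lambda*S1 + (1-lambda)*S2 on a grid of lambda in [0, 1]
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    return all(hurwitz(lam*S1 + (1.0 - lam)*S2)
               for lam in np.linspace(0.0, 1.0, n_lambda))

S1 = [1.0, 4.0, 6.0, 4.0]     # s^3 + 4s^2 + 6s + 4, stable
S2 = [1.0, 2.0, 3.0, 1.0]     # s^3 + 2s^2 + 3s + 1, stable
print(segment_stable(S1, S2))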

E. Risk and Merit Analysis

The following table summarizes and extends the risks, discussed in the previous sections, that matter for safety.






Kinds: 1) Disconnection of the feedback line; 2) Overshoot over the limit value
Evaluation of influence: 1) Spill-over of the threshold; 2) Attack on weak material
Countermeasure: Automatic change to manual mode by the M/A station; automatic shut down

Kinds: Change of tuning region from IPS to IPL by making the proportional gain large
Evaluation of influence: Grade-down of the stability region from strong or weak to weak or unstable
Countermeasure: Use IPO and do not use IPS; do not make the proportional gain large in the IPS tuning region

Kinds: Change of the damping coefficient or of the inverse time constant beyond the weak robust limit
Evaluation of influence: Grade-down of the stability region from strong or weak to weak or unstable
Countermeasure: Change the tuning region from IPL to IPS or IPO

Table E-1. Risk analysis for safety

It is important to reduce each of the above risks by adequate countermeasures, after sufficiently understanding the properties of, and the influence on, the controlled objects.

Next, the following table likewise summarizes and extends the merits and demerits, discussed in the previous sections, that matter for robust control.



Kinds: The steady-state error vanishes with time by the effect of integral action
Merit: This is an important property in process control and hard servo areas
Demerit: It is a disliked property in soft servo and robot control because of its hardness against disturbance

Kinds: There is a strong robust stability damping region in which the closed-loop gain margin for any uncertainty is over 2 and almost unchanging
Merit: It gives uniform safety for some proportional-gain tuning regions and for changes of the damping coefficient; for integral loop-gain tuning, the simple limiting-sensitivity approach is recommended
Demerit: Because the region differs with the proportional gain, there is a risk of grade-down by gain tuning

Kinds: There is a weak robust stability damping region in which the worst closed-loop gain margin for any uncertainty is over a given constant
Merit: It can specify the grade of robust stability for any uncertainty
Demerit: Because the region differs with the proportional gain, there is a risk of grade-down by gain tuning; safety differs among proportional-gain tuning regions

Table E-2. Merit analysis for control

It is important to apply the method to the application areas where its merits can be exploited and its demerits can be controlled by the wisdom of everyone.




7. References 

Bhattacharyya S. P., Chapellat H., and Keel L. H.(1994). Robust Control, The Parametric 
Approach, Upper Saddle River NJ07458 in USA: Prentice Hall Inc. 

Katoh M. and Hasegawa H., (1998). Tuning Methods of 2 nd Order Servo by I-PD Control 
Scheme, Proceedings of The 41st Joint Automatic Control Conference, pp. 111-112. (in 
Japanese) 

Katoh M.,(2003). An Integral Design of A Sampled and Continuous Robust Proper 
Compensator, Proceedings ofCCCT2003, (pdf 000564), Vol. Ill, pp. 226-229. 

Katoh M.,(2008). Simple Robust Normalized IP Control Design for Unknown Input 
Disturbance, SICE Annual Conference 2008, August 20-22, The University Electro- 
Communication, Japan, pp.2871-2876, No. :PR0001/ 08/ 0000-2871 

Katoh M., (2009). Loop Gain Margin in Simple Robust Normalized IP Control for Uncertain 
Parameter of One-Order Model Error, International Journal of Advanced Computer 
Engineering, Vol.2, No.l, January-June, pp.25-31, ISSN:0974-5785, Serials 
Publications, New Delhi (India) 

Katoh M and Imura N., (2009). Double-agent Convoying Scenario Changeable by an 
Emergent Trigger, Proceedings of the 4 th International Conference on Autonomous 
Robots and Agents, Feb 10-12, Wellington, New Zealand, pp.442-446 

Katoh M. and Fujiwara A., (2010). Simple Robust Stability for PID Control System of an 
Adjusted System with One-Changeable Parameter and Auto Tuning, International 
Journal of Advanced Computer Engineering, Vol.3, No.l, ISSN:0974-5785, Serials 
Publications, New Delhi (India) 

Katoh M.,(2010). Static and Dynamic Robust Parameters and PI Control Tuning of TV-MITE 
Model for Controlling the Liquid Level in a Single Tank", TC01-2, SICE Annual 
Conference 2010, 18/ August TC01-3 

Krajewski W., Lepschy A., and Viaro U.,(2004). Designing PI Controllers for Robust Stability 
and Performance, Institute of Electric and Electronic Engineers Transactions on Control 
System Technology, Vol. 12, No. 6, pp. 973- 983. 

Kohonen T.,(1995, 1997). Self-Organizing Maps, Springer 

Kojori H. A., Lavers J. D., and Dewan S. B.,(1993). A Critical Assessment of the Continuous- 
System Approximate Methods for the Stability Analysis of a Sampled Data System, 
Institute of Electric and Electronic Engineers Transactions on Power Electronics, Vol. 8, 
No. 1, pp. 76-84. 

Miyamoto S.,(1998). Design of PID Controllers Based on Hoc-Loop Shaping Method and LMI 
Optimization, Transactions of the Society of Instrument and Control Engineers, Vol. 34, 
No. 7, pp. 653-659. (in Japanese) 

Namba R., Yamamoto T., and Kaneda M., (1998). A Design Scheme of Discrete Robust PID 
Control Systems and Its Application, Transactions on Electrical and Electronic 
Engineering, Vol. 118-C, No. 3, pp. 320-325. (in Japanese) 

Olbrot A. W. and Nikodem M.,(1994) . Robust Stabilization: Some Extensions of the Gain 
Margin Maximization Problem, Institute of Electric and Electronic Engineers 
Transactions on Automatic Control, Vol. 39, No. 3, pp. 652- 657. 

Zhou K. with Doyle J. C. and Glover K. (1996). Robust and Optimal Control, Prentice Hall Inc.

Zhou K. and Khargonekar P. P. (1988). An Algebraic Riccati Equation Approach to H∞ Optimization, Systems & Control Letters, 11, pp. 85-91.



13 



Passive Fault Tolerant Control 

M. Benosman 
Mitsubishi Electric Research Laboratories 201 Broadway street, Cambridge, MA 02139* 

USA 



1. Introduction 

Today, as a result of increasing complexity of industrial automation technologies, fault 
handling of such automatic systems has become a challenging task. Indeed, although 
industrial systems are usually designed to perform optimally over time, performance degradation occurs inevitably. This is due, for example, to aging of system components, which have to be monitored to prevent system-wide failures. Fault handling is also necessary
to allow redesign of the control in such a way to recover, as much as possible, an optimal 
performance. To this end, researchers in the systems control community have focused on 
a specific control design strategy, called Fault tolerant control (FTC). Indeed, FTC is aimed 
at achieving acceptable performance and stability for the safe, i.e. fault-free system as well 
as for the faulty system. Many methods have been proposed to deal with this problem. 
For survey papers on FTC, the reader may refer to (5; 38; 53). While the available schemes 
can be classified into two types, namely passive and active FTC (53), the work presented 
here falls into the first category of passive FTC. Indeed, active FTC is aimed at ensuring 
the stability and some performance, possibly degraded, for the post-fault model, and this 
by reconfiguring on-line the controller, by means of a fault detection and diagnosis (FDD) 
component that detects, isolates and estimates the current fault (53). Contrary to this active 
approach, the passive solution consists in using a unique robust controller that will deal
with all the expected faults. The passive FTC approach has the drawback of being reliable
only for the class of faults expected and taken into account in the design. However, it has
the advantage of avoiding the time delay required in active FTC for on-line fault diagnosis
and control reconfiguration (42; 54), which is very important in practical situations where
the time window during which the faulty system stays stabilizable is very short, e.g. the
unstable double inverted pendulum example (37). In fact, in practical applications, passive
FTCs complement active FTC schemes. Indeed, passive FTC schemes are necessary during
the fault detection and estimation phase (50), to ensure the stability of the faulty system, 
before switching to active FTC. Several passive FTC methods have been proposed, mainly 
based on robust theory, e.g. multi-objective linear optimization and LMIs techniques (25), QFT 
method (47; 48), Hoo (36; 37), absolute stability theory (6), nonlinear regulation theory (10; 11), 
Lyapunov reconstruction (9) and passivity-based FTC (8). As for active FTC, many methods 
have been proposed for active linear FTC, e.g. (19; 29; 43; 46; 51; 52), as well as for nonlinear 
FTC, e.g. (4; 7; 13; 14; 20; 21; 28; 32-35; 39; 41; 49). 
We consider in this work the problem of fault tolerant control for failures resulting from loss of 



* E-mail: benosman@merl.com 




actuator effectiveness. FTCs dealing with actuator faults are relevant in practical applications 
and have already been the subject of many publications. For instance, in (43), the case 
of uncertain linear time-invariant models was studied. The authors treated the problem 
of actuators stuck at unknown constant values at unknown time instants. The active FTC 
approach they proposed was based on an output feedback adaptive method. Another active 
FTC formulation was proposed in (46), where the authors studied the problem of loss 
of actuator effectiveness in linear discrete-time models. The loss of control effectiveness 
was estimated via an adaptive Kalman filter. The estimation was complemented by a fault 
reconfiguration based on the LQG method. In (30), the authors proposed a multiple-controller 
based FTC for linear uncertain models. They introduced an active FTC scheme that ensured 
the stability of the system regardless of the decision of FDD. 

However, as mentioned earlier and as presented for example in (50), the aforementioned 
active schemes will incur a delay period during which the associate FDD component will have 
to converge to a best estimate of the fault. During this time period of FDD response delay, 
it is essential to control the system with a passive fault tolerant controller which is robust 
against actuator faults so as to ensure at least the stability of the system, before switching to 
another controller based on the estimated post-fault model, that ensures optimal post-fault 
performance. In this context, we propose here passive FTC schemes against actuator loss 
of effectiveness. The results presented here are based on the work of the author introduced 
in (6; 8). We first consider linear FTC and present some results on passive FTC for loss of 
effectiveness faults based on absolute stability theory. Next we present an extension of the 
linear results to some nonlinear models and use passivity theory to write nonlinear fault 
tolerant controllers. In this chapter several controllers are proposed for different problem 
settings: a) Linear time invariant (LTI) certain plants, b) uncertain LTI plants, c) LTI models 
with input saturations, d) nonlinear plants affine in the control with single input, e) general 
nonlinear models with constant as well as time-varying faults and with input saturation. We 
underline here that we focus in this chapter on the theoretical developments of the controllers, 
readers interested in numerical applications should refer to (6; 8). 

2. Preliminaries 

Throughout this chapter we will use the L₂ norm denoted ‖·‖, i.e. for x ∈ ℝⁿ we define ‖x‖ = √(xᵀx). The notation L_f h denotes the standard Lie derivative of a scalar function h(·) along a vector field f(·). Let us now introduce some definitions from (40) that will be used frequently in the sequel.

Definition 1 ((40), p. 45): The solution x(t, x₀) of the system ẋ = f(x), x ∈ ℝⁿ, f locally Lipschitz, is stable conditionally to Z, if x₀ ∈ Z and for each ε > 0 there exists δ(ε) > 0 such that

‖x₀ − x̃₀‖ < δ and x̃₀ ∈ Z  ⇒  ‖x(t, x₀) − x(t, x̃₀)‖ < ε, ∀t ≥ 0.

If, furthermore, there exists r(x₀) > 0 such that ‖x(t, x₀) − x(t, x̃₀)‖ → 0 as t → ∞, for all ‖x₀ − x̃₀‖ < r(x₀) and x̃₀ ∈ Z, the solution is asymptotically stable conditionally to Z. If r(x₀) → ∞, the stability is global.

Definition 2 ((40), p. 48): Consider the system H: ẋ = f(x, u), y = h(x, u), x ∈ ℝⁿ, u, y ∈ ℝᵐ, with zero inputs, i.e. ẋ = f(x, 0), y = h(x, 0), and let Z ⊂ ℝⁿ be its largest positively invariant set contained in {x ∈ ℝⁿ | y = h(x, 0) = 0}. We say that H is globally zero-state detectable (GZSD) if x = 0 is globally asymptotically stable conditionally to Z. If Z = {0}, the system H is zero-state observable (ZSO).




Definition 3 ((40), p. 27): We say that H is dissipative in X ⊂ ℝⁿ containing x = 0, if there exists a function S(x), S(0) = 0, such that for all x ∈ X

S(x) ≥ 0 and S(x(T)) − S(x(0)) ≤ ∫₀ᵀ ω(u(t), y(t)) dt,

for all u ∈ U ⊂ ℝᵐ and all T ≥ 0 such that x(t) ∈ X, ∀t ∈ [0, T], where the function ω: ℝᵐ × ℝᵐ → ℝ, called the supply rate, is locally integrable for every u ∈ U, i.e. ∫_{t₀}^{t₁} |ω(u(t), y(t))| dt < ∞, ∀t₀ ≤ t₁. S is called the storage function. If the storage function is differentiable, the previous condition writes as

Ṡ(x(t)) ≤ ω(u(t), y(t)).

The system H is said to be passive if it is dissipative with the supply rate ω(u, y) = uᵀy.

Definition 4 ((40), p. 36): We say that H is output feedback passive (OFP(ρ)) if it is dissipative with respect to ω(u, y) = uᵀy − ρyᵀy for some ρ ∈ ℝ.

We will also need the following definition to study the case of time-varying faults in Section 8.

We will also need the following definition to study the case of time-varying faults in Section 

8. 

Definition 5 (24): A function x: [0, ∞) → ℝⁿ is called a limiting solution of the system ẋ = f(t, x), with f a smooth vector function, with respect to an unbounded sequence t_n in [0, ∞), if there exist a compact set k ⊂ ℝⁿ and a sequence {x_n: [t_n, ∞) → k} of solutions of the system such that the associated sequence {x̃_n: t ↦ x_n(t + t_n)} converges uniformly to x on every compact subset of [0, ∞).

Also, throughout this chapter it is said that a statement P(t) holds almost everywhere (a.e.) if the Lebesgue measure of the set {t ∈ [0, ∞) | P(t) is false} is zero. We denote by df the differential of the function f: ℝⁿ → ℝ. We also mean by semiglobal stability of the equilibrium point x° for the autonomous system ẋ = f(x), x ∈ ℝⁿ, with f a smooth function, that for each compact set K ⊂ ℝⁿ containing x°, there exists a locally Lipschitz state feedback such that x° is asymptotically stable, with a basin of attraction containing K ((44), Definition 3, p. 1445).

3. FTC for known LTI plants

First, let us consider linear systems of the form

ẋ = Ax + Bαu,   (1)

where x ∈ ℝⁿ, u ∈ ℝᵐ are the state and input vectors, respectively, and α ∈ ℝ^(m×m) is a diagonal time-varying fault matrix, with diagonal elements α_ii(t), i = 1, …, m, such that 0 < ε₁ ≤ α_ii(t) ≤ 1. The matrices A, B have appropriate dimensions and satisfy the following assumption.

Assumption (1): The pair (A, B) is controllable.

3.1 Problem statement

Find a state feedback controller u(x) such that the closed-loop controlled system (1) admits x = 0 as a globally uniformly asymptotically (GUA) stable equilibrium point ∀α(t) (s.t. 0 < ε₁ ≤ α_ii(t) ≤ 1).

3.2 Problem solution

Hereafter, we re-write the problem of stabilizing (1), for all α(t) s.t. 0 < ε₁ ≤ α_ii(t) ≤ 1, as an absolute stability problem, or Lure's problem (2). Let us first recall the definition of sector nonlinearities.






Definition 6 ((22), p. 232): A static function ψ: [0, ∞) × ℝᵐ → ℝᵐ, s.t. [ψ(t, y) − K₁y]ᵀ[ψ(t, y) − K₂y] ≤ 0, ∀(t, y), with K = K₂ − K₁ = Kᵀ > 0, where K₁ = diag(k1₁, …, k1_m), K₂ = diag(k2₁, …, k2_m), is said to belong to the sector [K₁, K₂].

We can now recall the definition of absolute stability or Lure's problem.

Definition 7 (Absolute stability or Lure's problem (22), p. 264): We assume a linear system of the form

ẋ = Ax + Bu
y = Cx + Du   (2)
u = −ψ(t, y),

where x ∈ ℝⁿ, u ∈ ℝᵐ, y ∈ ℝᵐ, (A, B) is controllable, (A, C) is observable, and ψ: [0, ∞) × ℝᵐ → ℝᵐ is a static nonlinearity, piecewise continuous in t, locally Lipschitz in y, satisfying a sector condition as defined above. Then the system (2) is absolutely stable if the origin is GUA stable for any nonlinearity in the given sector. It is absolutely stable within a finite domain if the origin is uniformly asymptotically (UA) stable within a finite domain.

We can now introduce the idea used here, which is as follows.

Let us associate with the faulty system (1) a virtual output vector y ∈ ℝᵐ:

ẋ = Ax + Bαu
y = Kx,   (3)

and let us write the controller as an output feedback

u = −y.   (4)

From (3) and (4), we can write the closed-loop system as

ẋ = Ax + Bv
y = Kx   (5)
v = −α(t)y.

We have thus transformed the problem of stabilizing (1), for all bounded matrices α(t), into the problem of stabilizing the system (5) for all α(t). It is clear that the problem of GUA stabilization of (5) is a Lure's problem as in (2), with the linear time-varying 'nonlinearity' ψ(t, y) = α(t)y, which admits the sector bounds K₁ = diag(ε₁, …, ε₁), K₂ = I_(m×m).

Based on this formulation we can now solve the problem of passive fault tolerant control of (1) by applying absolute stability theory (26).
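To illustrate the closed loop (5) numerically, the following sketch (the system matrices, gain and fault profile are arbitrary illustrative choices, not taken from the chapter) simulates ẋ = Ax − Bα(t)Kx with a time-varying loss of effectiveness α(t) ∈ [ε₁, 1] and checks that the state decays towards the origin.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[3.0, 2.0]])            # illustrative virtual-output / feedback gain
eps1 = 0.3                            # lower bound on actuator effectiveness

def alpha(t):
    # time-varying loss of effectiveness, kept inside [eps1, 1]
    return eps1 + (1.0 - eps1)*0.5*(1.0 + np.sin(0.7*t))

def closed_loop(t, x):
    # x_dot = A x + B v,  v = -alpha(t) y,  y = K x   (system (5))
    return (A - alpha(t)*(B @ K)) @ x

sol = solve_ivp(closed_loop, (0.0, 20.0), [1.0, -0.5], max_step=0.05)
print(np.linalg.norm(sol.y[:, -1]))   # norm of the final state (small if stabilized)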

We can first write the following result.

Proposition 1: Under Assumption 1, the closed loop of (1) with the static state feedback

u = −Kx,   (6)

where K is the solution of the optimization problem

min over k_ij of Σ_i Σ_j k_ij²

subject to
[ PA(K) + Aᵀ(K)P                    (Cᵀ(K) − PB(K))W⁻¹ ]
[ ((Cᵀ(K) − PB(K))W⁻¹)ᵀ             −I                 ]  < 0

P > 0

rank [K; KA; …; KA^(n−1)] = n   (7)

for P = Pᵀ > 0, W = (D(K) + Dᵀ(K))^0.5, and {A(K), B(K), C(K), D(K)} a minimal realization of the transfer matrix

G̃ = [I + K(sI − A)⁻¹B][I + ε₁ × I_(m×m) K(sI − A)⁻¹B]⁻¹,   (8)

admits the origin x = 0 as a GUA stable equilibrium point.

Proof: We saw that the problem of stabilizing (1) with a static state feedback u = −Kx is equivalent to the stabilization of (5). Studying the stability of (5) is a particular case of the Lure's problem defined by (2), with the 'nonlinearity' function ψ(t, y) = −α(t)y associated with the sector bounds K₁ = ε₁ × I_(m×m), K₂ = I_(m×m) (introduced in Definition 6). Then, based on Theorem 7.1 in ((22), p. 265), we can write that under Assumption 1, and under the constraint of observability of the pair (A, K), the origin x = 0 is a GUA stable equilibrium point for (5) if the matrix transfer function

G̃ = [I + G(s)][I + ε₁ × I_(m×m) G(s)]⁻¹,

where G(s) = K(sI − A)⁻¹B, is strictly positive real (SPR). Now, using the KYP lemma as presented in (Lemma 6.3, (22), p. 240), a sufficient condition for the GUA stability of x = 0 along the solutions of (1) with u = −Kx is the existence of P = Pᵀ > 0, L and W s.t.

PA(K) + Aᵀ(K)P = −LᵀL − εP,  ε > 0
PB(K) = Cᵀ(K) − LᵀW   (9)
WᵀW = D(K) + Dᵀ(K),

where {A(K), B(K), C(K), D(K)} is a minimal realization of G̃. Finally, adding to equation (9) the observability condition on the pair (A, K), we arrive at the conditions

PA(K) + Aᵀ(K)P = −LᵀL − εP,  ε > 0
PB(K) = Cᵀ(K) − LᵀW
WᵀW = D(K) + Dᵀ(K)
rank [K; KA; …; KA^(n−1)] = n   (10)

Next, if we choose W = Wᵀ we can write W = (D(K) + Dᵀ(K))^0.5. The second equation in (10) leads to Lᵀ = (Cᵀ(K) − PB(K))W⁻¹. Finally, from the first equation in (10), we arrive at the following condition on P:

PA(K) + Aᵀ(K)P + (Cᵀ(K) − PB(K))W⁻¹((Cᵀ(K) − PB(K))W⁻¹)ᵀ < 0,

which is in turn equivalent to the LMI

[ PA(K) + Aᵀ(K)P                    (Cᵀ(K) − PB(K))W⁻¹ ]
[ ((Cᵀ(K) − PB(K))W⁻¹)ᵀ             −I                 ]  < 0.   (11)

Thus, to solve equation (10) we can solve the constrained optimization problem






min over k_ij of Σ_i Σ_j k_ij²

subject to
[ PA(K) + Aᵀ(K)P                    (Cᵀ(K) − PB(K))W⁻¹ ]
[ ((Cᵀ(K) − PB(K))W⁻¹)ᵀ             −I                 ]  < 0

P > 0

rank [K; KA; …; KA^(n−1)] = n   (12)

□



Note that the inequality constraints in (7) can be easily solved by available LMI algorithms, e.g. 
feasp under Matlab. Furthermore, to solve equation (10), we can propose two other different 
formulations: 

1. Through nonlinear algebraic equations: Choose W = Wᵀ, which implies by the third equation in (10) that W = (D(K) + Dᵀ(K))^0.5, for any K s.t.

PA(K) + Aᵀ(K)P = −LᵀL − εP,  ε > 0,  P = Pᵀ > 0
PB(K) = Cᵀ(K) − LᵀW
rank [K; KA; …; KA^(n−1)] = n   (13)

To solve (13) we can choose ε = e² and P = P̃ᵀP̃, which leads to the nonlinear algebraic equation

F(k_ij, p̃_ij, l_ij, e) = 0,   (14)

where k_ij (i = 1, …, m, j = 1, …, n), p̃_ij (i = 1, …, n, j = 1, …, n, P̃ ∈ ℝ^(n×n)) and l_ij (i = 1, …, m, j = 1, …, n) are the elements of K, P̃ and L, respectively. Equation (14) can then be solved by any nonlinear algebraic equation solver, e.g. fsolve under Matlab.

2. Through Algebraic Riccati Equations (ARE): It is well known that the positive real lemma equations, i.e. the first three equations in (10), can be transformed into the following ARE ((3), pp. 270-271):

P(Ā − B(K)R⁻¹C(K)) + (Ā − B(K)R⁻¹C(K))ᵀP + PB(K)R⁻¹Bᵀ(K)P + Cᵀ(K)R⁻¹C(K) = 0,   (15)

where Ā = A(K) + 0.5ε I_(n×n), R = D(K) + Dᵀ(K) > 0. Then, if a solution P = Pᵀ > 0 is found for (15), it is also a solution of the first three equations in (10), together with

W = −V R^(1/2),  L = (PB(K) − Cᵀ(K))R^(−1/2)Vᵀ,  VVᵀ = I.

To solve equation (10), we can then solve the constrained optimization problem

min over k_ij of Σ_i Σ_j k_ij²

subject to P > 0,  rank [K; KA; …; KA^(n−1)] = n   (16)

where P is the symmetric solution of the ARE (15), which can be directly computed by available solvers, e.g. care under Matlab.
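Whatever formulation is used to compute K, the SPR condition invoked in the proof of Proposition 1 can be spot-checked numerically on a frequency grid, as in the following sketch. This is an approximate verification only, and the example matrices and gain are illustrative assumptions rather than a design produced by (7), (12) or (16).

import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[3.0, 2.0]])
eps1 = 0.3
m = B.shape[1]

def G0(w):
    # G(s) = K (sI - A)^{-1} B evaluated at s = j w
    return K @ np.linalg.solve(1j*w*np.eye(A.shape[0]) - A, B)

def Gtilde(w):
    # G~ = [I + G(s)] [I + eps1 * I * G(s)]^{-1}, the transfer matrix of Eq. (8)
    g = G0(w)
    return (np.eye(m) + g) @ np.linalg.inv(np.eye(m) + eps1*g)

ws = np.logspace(-3, 3, 400)
hermitian_ok = all(np.all(np.linalg.eigvalsh(Gtilde(w) + Gtilde(w).conj().T) > 0)
                   for w in ws)
print("G~(jw) + G~(jw)* > 0 on the grid:", hermitian_ok)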

There are other linear controllers for LPV systems that might solve the problem stated in Section 3.1, e.g. (1). However, the solution proposed here benefits from the simplicity of a formulation based on absolute stability theory, and allows us to design FTCs for uncertain and saturated LTI plants, as well as nonlinear affine models, as we will see in the sequel. Furthermore, reformulating the FTC problem within the absolute stability framework may be applied to solve the FTC problem for several other systems, such as infinite dimensional systems, i.e. PDE models, stochastic systems and systems with delays (see (26) and the references therein). Furthermore, compared to optimal controllers, e.g. LQR, the proposed solution offers greater robustness, since it compensates for a loss of effectiveness over [ε₁, 1]. Indeed, it is well known that in the time-invariant case optimal controllers like LQR compensate for a loss of effectiveness over [1/2, 1] ((40), pp. 99-102). A larger loss of effectiveness can be covered, but at the expense of higher control amplitude ((40), Proposition 3.32, p. 100), which is not desirable in practical situations.
Let us consider now the more practical case of LTI plants with parameter uncertainties. 

4. FTC for uncertain LTI plants

We consider here models with structured uncertainties of the form

ẋ = (A + ΔA)x + (B + ΔB)αu,   (17)

where ΔA ∈ δA = {ΔA ∈ ℝ^(n×n) | ΔA_min ≤ ΔA ≤ ΔA_max, ΔA_min, ΔA_max ∈ ℝ^(n×n)},
ΔB ∈ δB = {ΔB ∈ ℝ^(n×m) | ΔB_min ≤ ΔB ≤ ΔB_max, ΔB_min, ΔB_max ∈ ℝ^(n×m)},
α = diag(α₁₁, …, α_mm), 0 < ε₁ ≤ α_ii ≤ 1 ∀i ∈ {1, …, m}, and A, B, x, u are as defined before.

4.1 Problem statement

Find a state feedback controller u(x) such that the closed-loop controlled system (17) admits x = 0 as a globally asymptotically (GA) stable equilibrium point ∀α (s.t. 0 < ε₁ ≤ α_ii ≤ 1), ∀ΔA ∈ δA, ΔB ∈ δB.

4.2 Problem solution

We first re-write the model (17) as follows:

ẋ = (A + ΔA)x + (B + ΔB)v
y = Kx   (18)
v = −αy.

The formulation given by (18) is an uncertain Lure's problem (as defined in (15), for example). We can write the following result:




Proposition 2: Under Assumption 1, the system (17) admits x = 0 as a GA stable equilibrium point, with the static state feedback u = −K̄H⁻¹x, where K̄, H are solutions of the LMIs

Q̄ + HAᵀ − K̄ᵀLᵀBᵀ + AH − BLK̄ < 0  ∀L ∈ Lᵛ,  Q̄ = Q̄ᵀ > 0,  H > 0
−Q̄ + HΔAᵀ − K̄ᵀLᵀΔBᵀ + ΔAH − ΔBLK̄ < 0  ∀(ΔA, ΔB, L) ∈ δAᵛ × δBᵛ × Lᵛ,   (19)

where Lᵛ is the set containing the vertices of {ε₁I_(m×m), I_(m×m)}, and δAᵛ, δBᵛ are the sets of vertices of δA, δB respectively.

Proof: Under Assumption 1, and using Theorem 5 in ((15), p. 330), we can write the stabilizing static state feedback u = −Kx, where K is such that, for a given H̃ > 0, Q = Qᵀ > 0, we have

Q + (A − BLK)ᵀH̃ + H̃(A − BLK) < 0  ∀L ∈ Lᵛ
−Q + (ΔA − ΔBLK)ᵀH̃ + H̃(ΔA − ΔBLK) < 0  ∀(ΔA, ΔB, L) ∈ δAᵛ × δBᵛ × Lᵛ,   (20)

where Lᵛ is the set containing the vertices of {ε₁I_(m×m), I_(m×m)}, and δAᵛ, δBᵛ are the sets of vertices of δA, δB respectively. Next, inequalities (20) can be transformed into LMIs by defining the new variables K̄ = KH̃⁻¹, H = H̃⁻¹, Q̄ = H̃⁻¹QH̃⁻¹ and multiplying both sides of the inequalities in (20) by H̃⁻¹; we can finally write (20) as

Q̄ + HAᵀ − K̄ᵀLᵀBᵀ + AH − BLK̄ < 0  ∀L ∈ Lᵛ,  Q̄ = Q̄ᵀ > 0,  H > 0
−Q̄ + HΔAᵀ − K̄ᵀLᵀΔBᵀ + ΔAH − ΔBLK̄ < 0  ∀(ΔA, ΔB, L) ∈ δAᵛ × δBᵛ × Lᵛ,   (21)

and the controller gain is given by K = K̄H⁻¹. □
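A direct numerical verification of the vertex inequalities (21) for a candidate pair (H, K̄) can be sketched as follows. The plant, uncertainty bounds and candidate design below are illustrative assumptions (a full synthesis would instead pass these LMIs to a semidefinite programming solver); the candidate is built from a gain K and a Lyapunov-like matrix P, then transformed as in the proof.

import numpy as np
from itertools import product

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
eps1 = 0.3
L_vertices  = [eps1*np.eye(1), np.eye(1)]
dA_vertices = [ 0.02*np.eye(2), -0.02*np.eye(2)]
dB_vertices = [np.array([[0.0], [0.02]]), np.array([[0.0], [-0.02]])]

# Candidate design (illustrative): gain K, Lyapunov-like matrix P, and Q
K  = np.array([[3.0, 2.0]])
P  = np.array([[3.0, 1.0], [1.0, 1.0]])
Qt = 0.5*np.eye(2)

# Transformed variables of Proposition 2: H = P^{-1}, Kbar = K H, Qbar = H Qt H
H    = np.linalg.inv(P)
Kbar = K @ H
Qbar = H @ Qt @ H

def neg_def(M):
    return np.max(np.linalg.eigvalsh(0.5*(M + M.T))) < 0.0

ok_nom = all(neg_def(Qbar + A@H + H@A.T - B@L@Kbar - (B@L@Kbar).T)
             for L in L_vertices)
ok_unc = all(neg_def(-Qbar + dA@H + H@dA.T - dB@L@Kbar - (dB@L@Kbar).T)
             for dA, dB, L in product(dA_vertices, dB_vertices, L_vertices))
print(ok_nom and ok_unc)     # True here -> u = -Kbar H^{-1} x = -K x is admissible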

Let us consider now the practical problem of input saturation. Indeed, in practical applications 
the available actuators have limited maximum amplitudes. For this reason, it is more realistic 
to consider bounded control amplitudes in the design of the fault tolerant controller. 

5. FTC for LTI plants with control saturation

We consider here the system (1) with input constraints |u_i| ≤ u_max_i, i = 1, …, m, and study the following FTC problem.

5.1 Problem statement

Find a bounded feedback controller, i.e. |u_i| ≤ u_max_i, i = 1, …, m, such that the closed-loop controlled system (1) admits x = 0 as a uniformly asymptotically (UA) stable equilibrium point ∀α(t) (s.t. 0 < ε₁ ≤ α_ii(t) ≤ 1), i = 1, …, m, within an estimated domain of attraction.

5.2 Problem solution

Under the actuator constraints |u_i| ≤ u_max_i, i = 1, …, m, the system (1) can be re-written as

ẋ = Ax + BU_max v
y = Kx   (22)
v = −α(t)sat(y),

where U_max = diag(u_max_1, …, u_max_m), sat(y) = (sat(y₁), …, sat(y_m))ᵀ, sat(y_i) = sign(y_i)min{1, |y_i|}.

Thus we have rewritten the system (1) as a MIMO Lure's problem with a generalized sector condition, which is a generalization of the SISO case presented in (16).

Next, we define the two functions ψ₁: ℝⁿ → ℝᵐ, ψ₁(x) = −ε₁I_(m×m)sat(Kx) and ψ₂: ℝⁿ → ℝᵐ, ψ₂(x) = −sat(Kx).

We can then write that v is spanned by the two functions ψ₁, ψ₂:

v(x, t) ∈ co{ψ₁(x), ψ₂(x)},  ∀x ∈ ℝⁿ, t ∈ ℝ,   (23)

where co{ψ₁(x), ψ₂(x)} denotes the convex hull of ψ₁, ψ₂, i.e.

co{ψ₁(x), ψ₂(x)} := {Σ_(i=1)^2 γ_i(t)ψ_i(x),  Σ_(i=1)^2 γ_i(t) = 1,  γ_i(t) ≥ 0, ∀t}.

Note that in the SISO case, the problem of analyzing the stability of x = 0 for the system (22) under the constraint (23) is a Lure's problem with a generalized sector condition as defined in (16).

Let us now recall some material from (16; 17) that we will use to prove Proposition 4.

Definition 8 ((16), p. 538): The ellipsoid level set ε(P, ρ) := {x ∈ ℝⁿ : V(x) = xᵀPx ≤ ρ}, ρ > 0, P = Pᵀ > 0, is said to be contractively invariant for (22) if

V̇ = 2xᵀP(Ax − BU_max α sat(Kx)) < 0,

for all x ∈ ε(P, ρ)\{0}, ∀t ∈ ℝ.

Proposition 3 ((16), p. 539): An ellipsoid ε(P, ρ) is contractively invariant for

ẋ = Ax + B sat(Fx),  B ∈ ℝ^(n×1)

if and only if

(A + BF)ᵀP + P(A + BF) < 0,

and there exists an H ∈ ℝ^(1×n) such that

(A + BH)ᵀP + P(A + BH) < 0,

and ε(P, ρ) ⊂ {x ∈ ℝⁿ : |Hx| ≤ 1}.

Fact 1 ((16), p. 539): Given a level set L_V(ρ) = {x ∈ ℝⁿ | V(x) ≤ ρ} and a set of functions ψ_i(u), i ∈ {1, …, N}, suppose that for each i ∈ {1, …, N}, L_V(ρ) is contractively invariant for ẋ = Ax + Bψ_i(u). Let ψ(u, t) ∈ co{ψ_i(u), i ∈ {1, …, N}} for all u, t ∈ ℝ; then L_V(ρ) is contractively invariant for ẋ = Ax + Bψ(u, t).

Theorem 1 ((17), p. 353): Given an ellipsoid level set ε(P, ρ), if there exists a matrix H ∈ ℝ^(m×n) such that

(A + BM(v, K, H))ᵀP + P(A + BM(v, K, H)) < 0,

for all v ∈ V := {v ∈ ℝᵐ | v_i = 1 or 0}, and ε(P, ρ) ⊂ L(H) := {x ∈ ℝⁿ : |h_i x| ≤ 1, i = 1, …, m}, where

M(v, K, H) = [v₁k₁ + (1 − v₁)h₁; … ; v_m k_m + (1 − v_m)h_m],   (24)

then ε(P, ρ) is a contractively invariant domain for ẋ = Ax + B sat(Kx).

We can now write the following result (hereafter, h_i and k_i denote the i-th row of H and K, respectively).

Proposition 4: Under Assumption 1, the system (1) admits x = 0 as a UA stable equilibrium






point, within the estimated domain of attraction e(P,p), with the static state feedback u 
Kx = YQ~ 1 x / where Y, Q solve the LMI problem 



> 0, / > 



«*/q>o,y,gJ 
7* I' 
I Q 

QA T + AQ + M(v, Y, G) T (BU n 

QA T + AQ + M(v,Y,G) T (BU n 
1 gi 



g Q 



> 0, i 



oc e ) T + (BU max ct e )M(v,Y,G) < 0, Vz? e V (25) 

) T + (BUmax)M(v,Y,G) < 0, \/v e V 



e\ x Imxm, M given by (24), P = pQ ^ and 



where g/ G IR lxn is the z'th line of G, a: 6 

K > 0, p > are chosen. 

Proof: Based on Theorem 1 recalled above, the inequalities

$$\big(A + BU_{\max}\alpha_\epsilon M(v,-K,H)\big)^T P + P\big(A + BU_{\max}\alpha_\epsilon M(v,-K,H)\big) < 0, \qquad (26)$$

together with the condition $\varepsilon(P,\rho) \subset \mathcal{L}(H)$, are sufficient to ensure that $\varepsilon(P,\rho)$ is contractively invariant for (1) with $\alpha = \epsilon_1 I_{m\times m}$, $u = -U_{\max}\mathrm{sat}(Kx)$.
Again based on Theorem 1, the inequalities

$$\big(A + BU_{\max}M(v,-K,H)\big)^T P + P\big(A + BU_{\max}M(v,-K,H)\big) < 0, \qquad (27)$$

together with $\varepsilon(P,\rho) \subset \mathcal{L}(H)$, are sufficient to ensure that $\varepsilon(P,\rho)$ is contractively invariant for (1) with $\alpha = I_{m\times m}$, $u = -U_{\max}\mathrm{sat}(Kx)$. Now, based on the direct extension to the MIMO case of Fact 1 recalled above, we conclude that $\varepsilon(P,\rho)$ is contractively invariant for (1) with $u = -U_{\max}\mathrm{sat}(Kx)$, $\forall\alpha_{ii}(t)$, $i = 1,\dots,m$, s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$.

Next, the inequality conditions (26), (27) under the constraint $\varepsilon(P,\rho) \subset \mathcal{L}(H)$ can be transformed into LMI conditions ((17), p. 355) as follows. To find the control gain $K$ that gives the largest estimate of the domain of attraction, we solve the LMI problem

$$\min_{Q>0,\,Y,\,G}\ \gamma \quad \text{s.t.}$$
$$\begin{bmatrix}\gamma R & I\\ I & Q\end{bmatrix} \ge 0,$$
$$QA^T + AQ + M(v,Y,G)^T(BU_{\max}\alpha_\epsilon)^T + (BU_{\max}\alpha_\epsilon)M(v,Y,G) < 0, \quad \forall v \in \mathcal{V},$$
$$QA^T + AQ + M(v,Y,G)^T(BU_{\max})^T + (BU_{\max})M(v,Y,G) < 0, \quad \forall v \in \mathcal{V}, \qquad (28)$$
$$\begin{bmatrix}1 & g_i\\ g_i^T & Q\end{bmatrix} \ge 0, \quad i = 1,\dots,m,$$

where $Y = -KQ$, $Q = (P/\rho)^{-1}$, $G = H(P/\rho)^{-1}$, $M(v,Y,G) = M(v,-K,H)Q$, $g_i = h_i(P/\rho)^{-1}$, $h_i \in \mathbb{R}^{1\times n}$ is the $i$-th line of $H$, and $R > 0$ is chosen. □

Remark 1: To solve the problem (25) we have to deal with $2^{m+1} + m + 1$ LMIs; to reduce the number of LMIs we can force $Y = G$, which means $K = -H(P/\rho)^{-1}Q^{-1}$. Indeed, in this case the second and third conditions in (25) reduce to the two LMIs

$$QA^T + AQ + G^T(BU_{\max}\alpha_\epsilon)^T + (BU_{\max}\alpha_\epsilon)G < 0,$$
$$QA^T + AQ + G^T(BU_{\max})^T + (BU_{\max})G < 0, \qquad (29)$$

which reduces the total number of LMIs in (25) to $m + 3$. ♦
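For readers who want to experiment numerically, the reduced problem of Remark 1 maps directly onto a standard semidefinite program. The following sketch (not part of the original chapter) sets it up with CVXPY for an illustrative second-order plant; the matrices $A$, $B$, the bound $u_{\max}$, the lower bound $\epsilon_1$ and the shape-reference matrix $R$ are placeholder values chosen only for the example.

```python
# A minimal sketch (not the authors' code) of the reduced LMI problem from
# Remark 1, solved with CVXPY. All numerical data below are illustrative.
import numpy as np
import cvxpy as cp

n, m = 2, 1
A = np.array([[0.0, 1.0], [1.0, -0.5]])      # hypothetical open-loop dynamics
B = np.array([[0.0], [1.0]])
U_max = np.diag([2.0])                        # actuator bounds u_max_i
eps1 = 0.3                                    # lower bound on loss of effectiveness
R = np.eye(n)                                 # shape-reference matrix (chosen)

Q = cp.Variable((n, n), symmetric=True)       # Q = (P/rho)^{-1}
G = cp.Variable((m, n))                       # G = H (P/rho)^{-1}, with Y = G
gamma = cp.Variable()

BU = B @ U_max
cons = [Q >> 1e-6 * np.eye(n),
        cp.bmat([[gamma * R, np.eye(n)], [np.eye(n), Q]]) >> 0]
# the two Lyapunov-type LMIs (29), for alpha = eps1*I and alpha = I
for a in (eps1, 1.0):
    cons.append(Q @ A.T + A @ Q + G.T @ (a * BU).T + (a * BU) @ G << -1e-6 * np.eye(n))
# ellipsoid inclusion eps(P,rho) subset L(H): [[1, g_i],[g_i^T, Q]] >= 0
for i in range(m):
    cons.append(cp.bmat([[np.ones((1, 1)), G[i:i+1, :]],
                         [G[i:i+1, :].T, Q]]) >> 0)

prob = cp.Problem(cp.Minimize(gamma), cons)
prob.solve(solver=cp.SCS)
K = -G.value @ np.linalg.inv(Q.value)         # feedback gain, since Y = G = -KQ
print("gamma =", gamma.value, "\nK =", K)
```

With $Y = G$ the recovered gain is $K = -GQ^{-1}$, and $\gamma$ measures how large a copy of the shape-reference ellipsoid fits inside the estimated domain of attraction.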

In the next section, we report some results in the extension of the previous linear controllers 

to single input nonlinear affine plants. 




6. FTC for nonlinear single input affine plants 

Let us consider now the nonlinear affine system

$$\dot{x} = f(x) + g(x)\alpha u, \qquad (30)$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}$ represent, respectively, the state vector and the scalar input. The vector field $f$ and the columns of $g$ are supposed to satisfy the classical smoothness assumptions, with $f(0) = 0$. The fault coefficient is such that $0 < \epsilon_1 \le \alpha \le 1$.

6.1 Problem statement 

Find a state feedback controller $u(x)$ such that the closed-loop controlled system (30) admits $x = 0$ as a local (global) asymptotically stable equilibrium point $\forall\alpha$ (s.t. $0 < \epsilon_1 \le \alpha \le 1$).

6.2 Problem solution 

We follow here the same idea used above for the linear case, and associate with the faulty system (30) a virtual scalar output; the corresponding system writes as

$$\dot{x} = f(x) + g(x)\alpha u,$$
$$y = k(x), \qquad (31)$$

where $k : \mathbb{R}^n \to \mathbb{R}$ is a continuous function.

Let us now choose the controller as the simple output feedback

$$u = -k(x). \qquad (32)$$

We can then write from (31) and (32) the closed-loop system as

$$\dot{x} = f(x) + g(x)v,$$
$$y = k(x), \qquad (33)$$
$$v = -\alpha y.$$

As before, we have cast the problem of stabilizing (30), for all $\alpha$, as the absolute stability problem (33) as defined in ((40), p. 55). We can then use absolute stability theory to solve the problem.

Proposition 5: The closed-loop system (30) with the static state feedback

$$u = -k(x), \qquad (34)$$

where $k$ is such that there exists a $C^1$ positive semidefinite, radially unbounded function $S : \mathbb{R}^n \to \mathbb{R}$ (i.e. $S(x) \to +\infty$ as $\|x\| \to +\infty$) that satisfies the PDEs

$$L_f S(x) = -0.5\,q^T(x)q(x) + \Big(\frac{\epsilon_1}{1-\epsilon_1}\Big)k^2(x),$$
$$L_g S(x) = \Big(\frac{1+\epsilon_1}{1-\epsilon_1}\Big)k(x) - q^T w, \qquad (35)$$

where the function $w : \mathbb{R}^n \to \mathbb{R}^l$ is s.t. $w^T w = \frac{2}{1-\epsilon_1}$, and $q : \mathbb{R}^n \to \mathbb{R}^l$, $l \in \mathbb{N}$, under the condition of local (global) detectability of the system

$$\dot{x} = f(x) + g(x)v,$$
$$y = k(x), \qquad (36)$$

admits the origin $x = 0$ as a local (global) asymptotically stable equilibrium point.

Proof: We saw the equivalence between the problem of stabilizing (30) and the absolute stability problem (33), with the 'nonlinearities' sector bounds $\epsilon_1$ and $1$. Based on this, we can use the sufficient condition provided in Proposition 2.38 in ((40), p. 55) to ensure the absolute stability of the origin $x = 0$ of (33), for all $\alpha \in [\epsilon_1, 1]$.

First we have to ensure that the parallel interconnection of the system

$$\dot{x} = f(x) + g(x)v,$$
$$y = k(x), \qquad (37)$$

with the trivial unitary-gain system

$$\tilde{y} = v, \qquad (38)$$

is OFP($-\tilde{k}$), with $\tilde{k} = \frac{\epsilon_1}{1-\epsilon_1}$, and with a $C^1$ radially unbounded storage function $S$.

Based on Definition 4, this is true if the parallel interconnection of (37) and (38) is dissipative with respect to the supply rate

$$\omega(v,\tilde{y}) = v^T\tilde{y} + \Big(\frac{\epsilon_1}{1-\epsilon_1}\Big)\tilde{y}^T\tilde{y}, \qquad (39)$$

where $\tilde{y} = y + v$. This means, based on Definition 3, that there exists a $C^1$ function $S : \mathbb{R}^n \to \mathbb{R}$, with $S(0) = 0$ and $S(x) \ge 0$, $\forall x$, s.t.

$$\dot{S}(x(t)) \le \omega(v,\tilde{y}) = v^T y + \|v\|^2 + \Big(\frac{\epsilon_1}{1-\epsilon_1}\Big)\|y+v\|^2. \qquad (40)$$

Furthermore, $S$ should be radially unbounded.

From the condition (40) and Theorem 2.39 in ((40), p. 56), we can write the following conditions on $S$, $k$ for the dissipativity of the parallel interconnection of (37) and (38) with respect to the supply rate (39):

$$L_f S(x) = -0.5\,q^T(x)q(x) + \Big(\frac{\epsilon_1}{1-\epsilon_1}\Big)k^2(x),$$
$$L_g S(x) = k(x) + \Big(\frac{2\epsilon_1}{1-\epsilon_1}\Big)k(x) - q^T w, \qquad (41)$$

where the function $w : \mathbb{R}^n \to \mathbb{R}^l$ is s.t. $w^T w = \frac{2}{1-\epsilon_1}$, and $q : \mathbb{R}^n \to \mathbb{R}^l$, $l \in \mathbb{N}$. Finally, based on Proposition 2.38 in ((40), p. 55), to ensure the local (global) asymptotic stability of $x = 0$, the system (37) has to be locally (globally) ZSD, which is imposed by the local (global) detectability of (36). □

Solving the conditions (41) might be computationally demanding, since it requires solving a system of PDEs. We can simplify the static state feedback controller by considering a lower bound of the condition (40). Indeed, condition (40) is satisfied if the inequality

$$\dot{S} \le v^T y \qquad (42)$$

holds. Thus, it suffices to ensure that the system (37) is passive with the storage function $S$. Now, based again on the necessary and sufficient condition given in Theorem 2.39 ((40), p. 56), the storage function and the feedback gain have to satisfy the conditions

$$L_f S(x) \le 0, \qquad L_g S(x) = k(x). \qquad (43)$$



Fig. 1. The model (45) in cascade form 

However, by considering a lower bound of (40), we are considering the extreme case where $\epsilon_1 \to 0$, which may result in a conservative feedback gain (refer to (6)). It is worth noting that in that case the controller given by (34), (43) reduces to the classical damping or Jurdjevic-Quinn control $u = -L_g S(x)$, e.g. ((40), p. 111), but based on a positive semidefinite function $S$.
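As a concrete illustration of this limiting case, the sketch below (not from the source) simulates the damping control $u = -L_g S(x)$ on a hypothetical scalar-input plant with $S(x) = \tfrac{1}{2}\|x\|^2$, under an arbitrary constant loss of effectiveness $\alpha \in [\epsilon_1, 1]$ that the controller never sees.

```python
# A minimal numerical sketch (illustrative only) of the damping /
# Jurdjevic-Quinn controller u = -L_g S(x) for a hypothetical scalar-input
# affine plant x_dot = f(x) + g(x)*alpha*u, with S(x) = 0.5*||x||^2.
import numpy as np

def f(x):                      # hypothetical drift, f(0) = 0 and L_f S <= 0
    return np.array([x[1], -x[0] - 0.1 * x[1]])

def g(x):                      # hypothetical input vector field
    return np.array([0.0, 1.0])

def u_damping(x):              # u = -L_g S(x) with S(x) = 0.5 x^T x
    return -g(x) @ x

alpha = 0.4                    # unknown loss of effectiveness in [eps1, 1]
x, dt = np.array([1.0, -0.5]), 1e-3
for _ in range(20000):         # forward-Euler simulation of the faulty loop
    x = x + dt * (f(x) + g(x) * alpha * u_damping(x))
print("final state:", x)       # approaches the origin for any alpha > 0
```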

7. FTC for nonlinear multi-input affine plants with constant loss of effectiveness 
actuator faults 

We consider here affine nonlinear models of the form

$$\dot{x} = f(x) + g(x)u, \qquad (44)$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ represent respectively the state and the input vectors. The vector field $f$ and the columns of $g$ are assumed to be $C^1$, with $f(0) = 0$.

We study actuator faults modelled by a multiplicative constant coefficient, i.e. a loss of effectiveness, which implies the following form for the faulty model²

$$\dot{x} = f(x) + g(x)\alpha u, \qquad (45)$$

where $\alpha \in \mathbb{R}^{m\times m}$ is a diagonal constant matrix, with diagonal elements $\alpha_{ii}$, $i = 1,\dots,m$, s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$. We write then the FTC problem as follows.

Problem statement: Find a feedback controller such that the closed-loop controlled system (45) admits $x = 0$ as a globally asymptotically stable (GAS) equilibrium point $\forall\alpha$ (s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$).



7.1 Problem solution 

Let us first rewrite the faulty model (45) in the following cascade form (see Figure 1)

$$\dot{x} = f(x) + g(x)h(\xi),$$
$$\dot{\xi} = \alpha v, \quad \xi(0) = 0, \qquad (46)$$
$$y = h(\xi) = \xi,$$

where we define the virtual input $v = \dot{u}$ with $u(0) = 0$. This is indeed a cascade form where the controlling subsystem, i.e. the $\xi$ dynamics, is linear (40). Using this cascade form, it is possible to write a stabilizing controller for the faulty model (45), as follows.



2 Hereafter, we will denote by x the states of the faulty system (45) to avoid cumbersome notations. 
However, we remind the reader that the solutions of the healthy system (44) and the faulty system (45) 
are different. 




Theorem 2: Consider the closed-loop system that consists of the faulty system (45) and the dynamic state feedback

$$\dot{u} = -L_g W(x)^T - k\xi, \quad u(0) = 0,$$
$$\dot{\xi} = \epsilon_1\big(-(L_g W(x))^T - k\xi\big), \quad \xi(0) = 0, \qquad (47)$$

where $W$ is a $C^1$ radially unbounded, positive semidefinite function s.t. $L_f W \le 0$, and $k > 0$. Consider the fictitious system

$$\dot{x} = f(x) + g(x)\xi,$$
$$\dot{\xi} = \epsilon_1\big(-(L_g W)^T + v\big), \qquad (48)$$
$$y = h(\xi) = \xi.$$

If the system (48) is (G)ZSD with the input $v$ and the output $y$, then the closed-loop system (45) with (47) admits the origin $(x,\xi) = (0,0)$ as a (G)AS equilibrium point.
Proof: We first prove that the cascade system (48) is passive from $v$ to $y = \xi$. To do so, let us first consider the linear part of the cascade system

$$\dot{\xi} = \epsilon_1 v, \quad \xi(0) = 0, \qquad y = h(\xi) = \xi. \qquad (49)$$

The system (49) is passive, with the $C^1$ positive definite, radially unbounded storage function $U(\xi) = \tfrac{1}{2}\xi^T\xi$. Indeed, we can easily see that $\forall T \ge 0$

$$U(\xi(T)) = \tfrac{1}{2}\xi^T(T)\xi(T) \le \int_0^T v^T y\, dt = \tfrac{1}{\epsilon_1}\int_0^T \dot{\xi}^T\xi\, dt = \tfrac{1}{2\epsilon_1}\xi^T(T)\xi(T),$$

which is true for $0 < \epsilon_1 \le 1$.
Next, we can verify that the nonlinear part of the cascade

$$\dot{x} = f(x) + g(x)\xi, \qquad y = (L_g W(x))^T, \qquad (50)$$

is passive, with the $C^1$ radially unbounded, positive semidefinite storage function $W$, since $\dot{W} = L_f W + L_g W\,\xi \le L_g W\,\xi$. Thus we have proved that both the linear and the nonlinear parts of the cascade are passive; we can then conclude that the feedback interconnection (48) of (49) and (50) (see Figure 2) is passive from the new input $\bar{v} = v + (L_g W)^T$ to the output $\xi$, with the storage function $S(x,\xi) = W(x) + U(\xi)$ (see Theorem 2.10 in (40), p. 33).
Finally, the passivity associated with the (G)ZSD property implies that the control $\bar{v} = -k\xi$, $k > 0$, achieves (G)AS (Theorem 2.28 in (40), p. 49).

Up to now, we have proved that the negative output feedback $\bar{v} = -k\xi$, $k > 0$, achieves the desired AS for $\alpha = \epsilon_1 I_{m\times m}$. We now have to prove that the result holds for all $\alpha$ s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$, even if $\xi$ is fed back from the fault model (46) with $\alpha = \epsilon_1 I_{m\times m}$, since we do not know the actual value of $\alpha$. If we multiply the control law (47) by a constant gain matrix $\bar{k} = \mathrm{diag}(\bar{k}_1,\dots,\bar{k}_m)$, $1 \le \bar{k}_i \le \tfrac{1}{\epsilon_1}$, we can write the new control as

$$\dot{u} = \bar{k}\big(-L_g W(x)^T - k\xi\big), \quad k > 0, \ u(0) = 0,$$
$$\dot{\xi} = \epsilon_1\bar{k}\big(-(L_g W(x))^T - k\xi\big), \quad \xi(0) = 0. \qquad (51)$$



Fig. 2. Feedback interconnection of (49) and (50) 

It is easy to see that this gain does not change the stability result, since we can define for the nonlinear part of the cascade (50) the new storage function $\bar{W} = \bar{k}W$, and passivity is still satisfied from its input $\xi$ to the new output $\bar{k}L_g W(x)$. Next, since the ZSD property remains unchanged, we can choose the new stabilizing output feedback $\bar{v} = -\bar{k}k\xi$, which is still a stabilizing feedback with the new gain $\bar{k}k > 0$, and thus the stability result holds for all $\alpha$ s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$, $i = 1,\dots,m$. □

The stability result obtained in Theorem 2 depends on the ZSD property. Indeed, if the ZSD is global, the stability obtained is global; otherwise only local stability is ensured. Furthermore, we note here that with the dynamic controller (47) we ensure that the initial control is zero, regardless of the initial value of the states. This might be important for practical applications, where an abrupt switch from zero to a nonzero initial value of the control is not tolerated by the actuators.
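To make the construction of Theorem 2 concrete, here is a brief simulation sketch under stated assumptions: a hypothetical single-input plant, $W(x) = \tfrac{1}{2}\|x\|^2$ (so that $L_g W(x)^T = g(x)^T x$), and the dynamic law (47) implemented as $\dot{u} = -L_g W(x)^T - k\xi$, $\dot{\xi} = \epsilon_1(-L_g W(x)^T - k\xi)$. It is only an illustration of the idea, not the authors' code.

```python
# Illustrative simulation of the dynamic FTC law (47) on a toy faulty plant
# x_dot = f(x) + g(x)*alpha*u with unknown constant alpha in [eps1, 1].
import numpy as np

def f(x):                                   # hypothetical drift with L_f W <= 0
    return np.array([x[1], -x[0] - 0.2 * x[1]])

def g(x):
    return np.array([0.0, 1.0])

eps1, alpha, k, dt = 0.3, 0.6, 1.0, 1e-3    # alpha is the unknown true fault
x = np.array([1.0, 0.5])
u, xi = 0.0, 0.0                            # u(0) = 0, xi(0) = 0
for _ in range(30000):
    LgW = g(x) @ x                          # L_g W(x) for W = 0.5 x^T x
    udot = -LgW - k * xi
    xidot = eps1 * (-LgW - k * xi)          # controller's worst-case copy of xi
    x = x + dt * (f(x) + g(x) * alpha * u)
    u, xi = u + dt * udot, xi + dt * xidot
print("state:", x, " control:", u)
```

Note how $u(0) = 0$ regardless of $x(0)$, which is the practical feature discussed above.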

In Theorem 2, one of the required conditions is the existence of $W \ge 0$ s.t. the uncontrolled part of (45) satisfies $L_f W \le 0$. To avoid this condition, which may not be satisfied for some practical systems, we propose the following theorem.

Theorem 3: Consider the closed-loop system that consists of the faulty system (45) and the dynamic state feedback

$$\dot{u} = -k\big(\xi - \beta K(x)\big) - \beta L_g W(x)^T + \beta\frac{\partial K}{\partial x}\big(f + g\xi\big), \quad \beta = \mathrm{diag}(\beta_{11},\dots,\beta_{mm}),\ 0 < \epsilon_1 \le \beta_{ii} \le 1,$$
$$\dot{\xi} = \epsilon_1\Big(-k\big(\xi - \beta K(x)\big) - \beta L_g W(x)^T + \beta\frac{\partial K}{\partial x}\big(f + g\xi\big)\Big), \quad \xi(0) = 0,\ u(0) = 0, \qquad (52)$$

where $k > 0$ and the $C^1$ function $K(x)$ is s.t. there exists a $C^1$ radially unbounded, positive semidefinite function $W$ satisfying

$$\frac{\partial W}{\partial x}\big(f(x) + g(x)\beta K(x)\big) \le 0, \quad \forall x \in \mathbb{R}^n,\ \forall\beta = \mathrm{diag}(\beta_{11},\dots,\beta_{mm}),\ 0 < \epsilon_1 \le \beta_{ii} \le 1. \qquad (53)$$

Consider the fictitious system

$$\dot{x} = f(x) + g(x)\xi, \qquad \tilde{y} = \xi - \beta K(x). \qquad (54)$$

If (54) is (G)ZSD with the input $\tilde{v}$ and the output $\tilde{y}$, for all $\beta$ s.t. $0 < \epsilon_1 \le \beta_{ii} \le 1$, $i = 1,\dots,m$, then the closed-loop system (45) with (52) admits the origin $(x,\xi) = (0,0)$ as a (G)AS equilibrium point.

Proof: We will first prove that the controller (52) achieves the stability result for a faulty model with $\alpha = \epsilon_1 I_{m\times m}$, and then we will prove that the stability result holds for all $\alpha$ s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$, $i = 1,\dots,m$.
First, let us define the virtual output $\tilde{y} = \xi - \beta K(x)$; we can write the model (46) with $\alpha = \epsilon_1 I_{m\times m}$ as

$$\dot{x} = f(x) + g(x)\big(\tilde{y} + \beta K(x)\big), \quad \dot{\xi} = \epsilon_1 I_{m\times m}v, \quad \tilde{y} = \xi - \beta K(x); \qquad (55)$$

we can then write

$$\dot{\tilde{y}} = \epsilon_1 I_{m\times m}v - \beta\frac{\partial K}{\partial x}\big(f + g(\tilde{y} + \beta K(x))\big) =: \bar{v}.$$

To study the passivity of (55), we define the positive semidefinite storage function

$$V = \beta W(x) + \tfrac{1}{2}\tilde{y}^T\tilde{y},$$

and write

$$\dot{V} = \beta L_{f+g\beta K}W + \beta L_g W\,\tilde{y} + \tilde{y}^T\bar{v};$$

using the condition (53), we can write

$$\dot{V} \le \tilde{y}^T\big(\beta L_g W^T + \bar{v}\big),$$

which establishes the passivity of (55) from the new input $\tilde{v} = \bar{v} + \beta L_g W^T$ to the output $\tilde{y}$.
Finally, using the (G)ZSD condition for $\alpha = \epsilon_1 I_{m\times m}$, we conclude the (G)AS of (46) for $\alpha = \epsilon_1 I_{m\times m}$ with the controller (52) (Theorem 2.28 in (40), p. 49). It remains to prove that the same result holds for all $\alpha$ s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$, $i = 1,\dots,m$, i.e. that the controller (52) has the appropriate gain margin. In our particular case it is straightforward to analyse the gain margin of (52), since if we multiply the controller in (52) by a matrix $\bar{\alpha}$, s.t. $0 < \epsilon_1 \le \bar{\alpha}_{ii} \le 1$, $i = 1,\dots,m$, the new control writes as

$$\dot{u} = \bar{\alpha}\Big(-k\big(\xi - \beta K(x)\big) - \beta L_g W^T + \beta\frac{\partial K}{\partial x}(f + g\xi)\Big), \quad k > 0,\ \beta = \mathrm{diag}(\beta_{11},\dots,\beta_{mm}),\ 0 < \epsilon_1 \le \beta_{ii} \le 1,$$
$$\dot{\xi} = \epsilon_1\bar{\alpha}\Big(-k\big(\xi - \beta K(x)\big) - \beta L_g W^T + \beta\frac{\partial K}{\partial x}(f + g\xi)\Big), \quad \xi(0) = 0,\ u(0) = 0. \qquad (56)$$

We can see that this factor will not change the structure of the initial controller (52), since it will be directly absorbed by the gains: we can write $\bar{k} = \bar{\alpha}k$, with all the elements of the diagonal matrix $\bar{k}$ positive, and we can also define $\bar{\beta} = \bar{\alpha}\beta$, which is still a diagonal matrix with bounded elements in $[\epsilon_1, 1]$, s.t. (53) and (54) are still satisfied. Thus the stability result remains unchanged. □

The previous theorems may guarantee global AS. However, the conditions required may be difficult to satisfy for some systems. We present below a control law ensuring, under less demanding conditions, semiglobal stability instead of global stability.

Theorem 4: Consider the closed-loop system that consists of the faulty system (45) and the dynamic state feedback

$$\dot{u} = -k\big(\xi - u_{nom}(x)\big), \quad k > 0,$$
$$\dot{\xi} = -k\epsilon_1\big(\xi - u_{nom}(x)\big), \quad \xi(0) = 0,\ u(0) = 0, \qquad (57)$$

where the nominal controller $u_{nom}(x)$ achieves semiglobal asymptotic and local exponential stability of $x = 0$ for the safe system (44). Then the closed-loop system (45) with (57) admits the origin $(x,\xi) = (0,0)$ as a semiglobal AS equilibrium point.

Proof: The proof is a direct application of Proposition 6.5 in ((40), p. 244) to the system (46), with $\alpha = \epsilon_1 I_{m\times m}$. Any positive gain $\bar{\alpha}$, s.t. $1 \le \bar{\alpha}_{ii} \le \tfrac{1}{\epsilon_1}$, $i = 1,\dots,m$, will be absorbed by $k > 0$, keeping the stability results unchanged. Thus the control law (57) stabilizes (46), and equivalently (45), for all $\alpha$ s.t. $0 < \epsilon_1 \le \alpha_{ii} \le 1$. □

Let us now consider the practical problem of input saturation. Indeed, in practical systems the actuator power is limited, and thus the control amplitude bounds should be taken into account in the controller design. To solve this problem, we consider a more general model than the affine model (44). In the following we first study the problem of FTC with input saturation on the general model

$$\dot{x} = f(x) + g(x,u)u, \qquad (58)$$

where $x$, $u$, $f$ are defined as before, and $g$ is now a function of both the states and the inputs, assumed to be $C^1$ w.r.t. $x$, $u$. The actuator fault model writes as

$$\dot{x} = f(x) + g(x,\alpha u)\alpha u, \qquad (59)$$

with the loss of effectiveness matrix $\alpha$ defined as before. This problem is treated in the following theorem, for the scalar case where $\alpha \in [\epsilon_1, 1]$, i.e. when the same fault occurs on all the actuators.
Theorem 5: Consider the closed-loop system that consists of the faulty system (59), for $\alpha \in [\epsilon_1, 1]$, and the static state feedback

$$u(x) = -\lambda(x)G(x,0)^T, \qquad G(x,0) = \frac{\partial W}{\partial x}(x)\,g(x,0),$$
$$\lambda(x) = \frac{\bar{u}}{\big(1+\gamma_1(|x|^2+4\bar{u}^2|G(x,0)|^2)\big)\big(1+|G(x,0)|^2\big)} > 0, \qquad (60)$$

with $\gamma_1$ the function defined in Lemma II.4 of (31), where $W$ is a $C^2$ radially unbounded, positive semidefinite function s.t. $L_f W \le 0$. Consider the fictitious system

$$\dot{x} = f(x) + g(x,\epsilon_1 u)\,\epsilon_1 u, \qquad y = \frac{\partial W(x)}{\partial x}\,g(x,\epsilon_1 u). \qquad (61)$$

If (61) is (G)ZSD, then the closed-loop system (59) with (60) admits the origin as a (G)AS equilibrium point. Furthermore, $|u(x)| \le \bar{u}$, $\forall x$.

Proof: Let us first consider the faulty model (59) with $\alpha = \epsilon_1$. For this model, we can compute the derivative of $W$ as

$$\dot{W}(x) = L_f W + \frac{\partial W(x)}{\partial x}\,\epsilon_1 g(x,\epsilon_1 u)u \le \frac{\partial W(x)}{\partial x}\,\epsilon_1 g(x,\epsilon_1 u)u.$$

Now, using Lemma II.4 (p. 1562 in (31)), we can directly write the controller (60) s.t.

$$\dot{W} \le -\tfrac{1}{2}\lambda(x)|G(x,0)|^2.$$

Furthermore, $|u(x)| \le \bar{u}$, $\forall x$.

We conclude then that the trajectories of the closed-loop system converge to the invariant set $\{x \mid \lambda(x)|G(x,0)|^2 = 0\}$, which is equivalent to the set $\{x \mid G(x,0) = 0\}$. Based on Theorem 2.21 (p. 43, (40)) and the assumption of (G)ZSD for (61), we conclude the (G)AS of the origin of (59), (60) with $\alpha = \epsilon_1$. Now, multiplying $u$ by any positive coefficient $\alpha$ s.t. $0 < \epsilon_1 \le \alpha \le 1$ does not change the stability result. Furthermore, if $|u(x)| \le \bar{u}$, $\forall x$, then $|\alpha u(x)| \le \bar{u}$, $\forall x$, which completes the proof. □

Remark 2: In Theorem 5 we consider only the case of a scalar fault $\alpha \in [\epsilon_1, 1]$, i.e. the case of a uniform fault, since we need this assumption to be able to apply the result of Lemma II.4 in (31). However, this assumption can be satisfied in practice by a class of actuators, namely pneumatically driven diaphragm-type actuators (23), for which a failure of the pressure supply system might lead to a uniform fault of all the actuators. Furthermore, in Proposition 6 below we treat, for the case of systems affine in the control, i.e. $g(x,u) = g(x)$, the general case of any diagonal matrix of loss of effectiveness coefficients.

Proposition 6: Consider the closed-loop system that consists of the faulty system (45) and the static state feedback

$$u(x) = -\lambda(x)G(x)^T, \qquad G(x) = \frac{\partial W}{\partial x}(x)\,g(x), \qquad \lambda(x) = \frac{\bar{u}}{1+|G(x)|^2}, \qquad (62)$$

where $W$ is a $C^2$ radially unbounded, positive semidefinite function s.t. $L_f W \le 0$. Consider the fictitious system

$$\dot{x} = f(x) + g(x)\epsilon_1 u, \qquad y = \frac{\partial W(x)}{\partial x}\,g(x). \qquad (63)$$

If (63) is (G)ZSD, then the closed-loop system (45) with (62) admits the origin as a (G)AS equilibrium point. Furthermore, $|u(x)| \le \bar{u}$, $\forall x$.

Proof: The proof follows the same steps as the proof of Theorem 5, except that in this case the constraint that the same fault occurs on all the actuators, i.e. a scalar $\alpha$, is relaxed. Indeed, in this case we can directly ensure the negativeness of $\dot{W}$, since with the controller (62) and a diagonal fault matrix the derivative writes as

$$\dot{W} \le -\lambda(x)L_g W(x)\,\alpha\,L_g W(x)^T \le -\lambda(x)\epsilon_1 L_g W(x)L_g W(x)^T \le -\epsilon_1\lambda(x)|G(x)|^2 \le 0.$$

Thus, the stability result remains unchanged. □
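A small numerical sketch of the bounded static law (62) follows; the plant, the bound $\bar{u}$ and $W(x) = \tfrac{1}{2}\|x\|^2$ are illustrative assumptions, the point being that $|u(x)| \le \bar{u}$ holds by construction while the loss of effectiveness $\alpha$ remains unknown to the controller.

```python
# Illustrative sketch (toy two-state plant, W = 0.5*||x||^2) of the bounded
# passive FTC law (62): u = -lambda(x) G(x)^T, lambda(x) = u_bar/(1+|G(x)|^2).
import numpy as np

u_bar = 1.5                                   # actuator amplitude bound
def f(x):                                     # hypothetical drift, L_f W <= 0
    return np.array([x[1], -x[0]])
def g(x):                                     # single column here (m = 1)
    return np.array([[0.0], [1.0]])

def control(x):
    G = x @ g(x)                              # G(x) = dW/dx g(x) for W = 0.5 x^T x
    lam = u_bar / (1.0 + float(G @ G))
    return -lam * G                           # |u| <= u_bar by construction

alpha, dt, x = 0.5, 1e-3, np.array([2.0, -1.0])
for _ in range(40000):
    u = control(x)
    x = x + dt * (f(x) + (g(x) * alpha) @ u)  # faulty plant, unknown alpha
print("final state:", x, " |u| <= u_bar:", abs(control(x)) <= u_bar)
```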

Up to now we have considered the case of abrupt faults, modelled with constant loss of 
effectiveness matrices. However, in practical applications, the faults are usually time-varying 
or incipient, modelled with time-varying loss of effectiveness coefficients, e.g. (50). We 
consider in the following section this case of time-varying loss of effectiveness matrices. 

8. FTC for nonlinear multi-input affine plants with time-varying loss of 
effectiveness actuator faults 

We consider here faulty models of the form

$$\dot{x} = f(x) + g(x)\alpha(t)u, \qquad (64)$$

where $\alpha(t)$ is a diagonal time-varying matrix with $C^1$ diagonal elements $\alpha_{ii}(t)$, $i = 1,\dots,m$, s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$, $\forall t$. We write then the FTC problem as follows.

Problem statement: Find a feedback controller such that the closed-loop controlled system (64) admits $x = 0$ as a uniformly asymptotically stable (UAS) equilibrium point $\forall\alpha(t)$ (s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$).




8.1 Problem solution 

To solve this problem we use some of the tools introduced in (24), where a generalization of the Krasovskii-LaSalle theorem has been proposed for nonlinear time-varying systems. We can first write the following result.

Theorem 6: Consider the closed-loop system that consists of the faulty system (64) with the dynamic state feedback

$$\dot{u} = -L_g W(x)^T - k\xi, \quad k > 0, \quad u(0) = 0,$$
$$\dot{\xi} = \bar{\alpha}(t)\big(-(L_g W(x))^T - k\xi\big), \quad \xi(0) = 0, \qquad (65)$$

where $\bar{\alpha}(t)$ is a $C^1$ function s.t. $0 < \epsilon_1 \le \bar{\alpha}(t) \le 1$, $\forall t$, and $W$ is a $C^1$, positive semidefinite function such that:

1- $L_f W \le 0$,

2- The system $\dot{x} = f(x)$ is AS conditionally to the set $M = \{x \mid W(x) = 0\}$,

3- $\forall(x,\xi)$ limiting solutions of the system

$$\dot{x} = f(x) + g(x)\xi,$$
$$\dot{\xi} = \bar{\alpha}(t)\big(-(L_g W)^T - k\xi\big), \qquad (66)$$
$$y = h(x,\xi) = \xi,$$

w.r.t. an unbounded sequence $\{t_n\}$ in $[0,\infty)$: if $h(x,\xi) = 0$ a.e., then either $(x,\xi)(t_0) = (0,0)$ for some $t_0 \ge 0$, or $(0,0)$ is an $\omega$-limit point of $(x,\xi)$, i.e. $\lim_{t\to\infty}(x,\xi)(t) = (0,0)$.

Then the closed-loop system (64) with (65) admits the origin $(x,\xi) = (0,0)$ as a UAS equilibrium point.

Proof: Let us first rewrite the system (64), for $\alpha(t) = \bar{\alpha}(t)$, in the cascade form

$$\dot{x} = f(x) + g(x)h(\xi),$$
$$\dot{\xi} = \bar{\alpha}(t)v, \quad v = \dot{u}, \quad \xi(0) = 0, \ u(0) = 0, \qquad (67)$$
$$y = h(\xi) = \xi.$$

Replacing $v = \dot{u}$ by its value in (65) gives the feedback system

$$\dot{x} = f(x) + g(x)h(\xi),$$
$$\dot{\xi} = \bar{\alpha}(t)\big(-L_g W(x)^T + \bar{v}\big), \quad \xi(0) = 0,\ u(0) = 0, \qquad (68)$$
$$y = h(\xi) = \xi.$$

We prove that (68) is passive from the input $\bar{v}$ to the output $\xi$. We consider first the linear part of (67),

$$\dot{\xi} = \bar{\alpha}(t)\bar{v}, \quad \xi(0) = 0, \qquad y = h(\xi) = \xi, \qquad (69)$$

which is passive with the storage function $U(\xi) = \tfrac{1}{2}\xi^T\xi$, i.e. $\dot{U}(t,\xi) = \xi^T\dot{\xi} = \xi^T\bar{\alpha}(t)\bar{v} \le \xi^T\bar{v}$.
Next, we consider the nonlinear part

$$\dot{x} = f(x) + g(x)\xi, \qquad y = L_g W(x)^T, \qquad (70)$$

which is passive with the storage function $W(x)$, since $\dot{W} = L_f W + L_g W\,\xi \le L_g W\,\xi$.
We conclude that the feedback interconnection (68) of (69) and (70) is passive from $\bar{v}$ to $\xi$, with the storage function $S(x,\xi) = W(x) + U(\xi)$ (see Theorem 2.10, p. 33 in (40)).
This implies that the derivative of $S$ along (68) with $\bar{v} = -k\xi$, $k > 0$, writes

$$\dot{S}(t,x,\xi) \le -k\,\xi^T\xi \le 0.$$

Now we define for (68) with $\bar{v} = -k\xi$, $k > 0$, the positive invariant set

$$M = \{(x,\xi) \mid W(x) + U(\xi) = 0\} = \{(x,0) \mid W(x) = 0\}.$$

We note that the restriction of (68) with $\bar{v} = -k\xi$, $k > 0$, to $M$ is $\dot{x} = f(x)$; then, applying Theorem 5 in (18), we conclude that, under Condition 2 of Theorem 6, the origin $(x,\xi) = (0,0)$ is US for the system (64) with $\alpha = \bar{\alpha}$ and the dynamic controller (65). Now, multiplying $u$ by any $\alpha(t)$ s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$, $\forall t$, changes neither the passivity property nor the AS condition of $\dot{x} = f(x)$ on $M$, which implies the US of $(x,\xi) = (0,0)$ for (64), (65), $\forall\alpha(t)$ s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$, $\forall t$.
Now we note the following fact: for any $\bar{a} > 0$ and any $t \ge t_0$ we can write

$$S\big(t,x(t),\xi(t)\big) - S\big(t_0,x(t_0),\xi(t_0)\big) \le -\int_{t_0}^{t}\varphi\big(h(\xi(\tau))\big)d\tau = -\int_{t_0}^{t}k|\xi(\tau)|^2 d\tau,$$

thus we have

$$\int_{t_0}^{t}\Big(\varphi\big(h(\xi(\tau))\big) - \bar{a}\Big)d\tau \le \int_{t_0}^{t}\varphi\big(h(\xi(\tau))\big)d\tau \le S\big(t_0,x(t_0),\xi(t_0)\big) \le M_S, \quad M_S > 0.$$

Finally, using Theorem 1 in (24), under Condition 3 of Theorem 6, we conclude that $(x,\xi) = (0,0)$ is UAS for (64), (65). □

Remark 3: The function $\bar{\alpha}$ in (65) has been chosen to be any $C^1$ time-varying function s.t. $0 < \epsilon_1 \le \bar{\alpha}(t) \le 1$, $\forall t$. The general time-varying nature of the function was necessary in the proof in order to use the results of Theorem 5 in (18) to prove the US of the faulty system's equilibrium point. However, in practice one can simply choose $\bar{\alpha}(t) = 1$, $\forall t$. ♦

Remark 4: Condition 3 in Theorem 6 is general and has been used to properly prove the stability results in the time-varying case. However, in practical applications it can be further simplified, using the notion of the reduced limiting system. Indeed, using Theorem 3 and Lemma 7 in (24), Condition 3 simplifies to:
$\forall(x,\xi)$ solutions of the reduced limiting system

$$\dot{x} = f(x) + g(x)\xi,$$
$$\dot{\xi} = \bar{\alpha}_\gamma(t)\big(-(L_g W(x))^T - k\xi\big), \qquad (71)$$

where the limiting function $\bar{\alpha}_\gamma(t)$ is defined as $\bar{\alpha}_\gamma(t) = \lim_{n\to\infty}\bar{\alpha}(t+t_n)$ w.r.t. an unbounded sequence $\{t_n\}$ in $[0,\infty)$: if $h(x,\xi) = 0$ a.e., then either $(x,\xi)(t_0) = (0,0)$ for some $t_0 \ge 0$, or $(0,0)$ is an $\omega$-limit point of $(x,\xi)$. Now, since in our case the diagonal matrix-valued function $\alpha$ is s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$, $\forall t$, it obviously satisfies a persistent excitation (PE) condition of the form

$$\int_{t}^{t+T}\alpha(\tau)\alpha(\tau)^T d\tau \ge rI, \quad T > 0,\ r > 0,\ \forall t,$$

which implies, based on Lemma 8 in (24), that to check Condition 3 we only need to check the classical ZSD condition:
$\forall x$ solutions of the system

$$\dot{x} = f(x), \qquad L_g W(x) = 0, \qquad (72)$$

either $x(t_0) = 0$ for some $t_0 \ge 0$, or $0$ is an $\omega$-limit point of $x$. ♦

Let us consider again the problem of input saturation. We consider here again the more general model (58), and study the problem of FTC with input saturation for the time-varying faulty model

$$\dot{x} = f(x) + g\big(x,\alpha(t)u\big)\alpha(t)u, \qquad (73)$$

with the diagonal loss of effectiveness matrix $\alpha(t)$ defined as before. This problem is treated in the following theorem, for the scalar case where $\alpha(t) \in [\epsilon_1, 1]$, $\forall t$, i.e. when the same fault occurs on all the actuators.

Theorem 7: Consider the closed-loop system that consists of the faulty system (73), for $\alpha(t) \in [\epsilon_1, 1]$, $\forall t$, with the static state feedback

$$u(x) = -\lambda(x)G(x,0)^T, \qquad G(x,0) = \frac{\partial W}{\partial x}(x)\,g(x,0),$$
$$\lambda(x) = \frac{\bar{u}}{\big(1+\gamma_1(|x|^2+4\bar{u}^2|G(x,0)|^2)\big)\big(1+|G(x,0)|^2\big)} > 0, \qquad (74)$$

with $\gamma_1$ the function defined in Lemma II.4 of (31), where $W$ is a $C^2$, positive semidefinite function such that:

1- $L_f W \le 0$,

2- The system $\dot{x} = f(x)$ is AS conditionally to the set $M = \{x \mid W(x) = 0\}$,

3- $\forall x$ limiting solutions of the system

$$\dot{x} = f(x) + g\big(x,\epsilon_1 u(x)\big)\Big(-\lambda(x)\alpha(t)\frac{\partial W}{\partial x}(x)g(x,0)\Big)^T,$$
$$y = h(x) = \lambda(x)^{0.5}\Big|\frac{\partial W}{\partial x}(x)g(x,0)\Big|, \qquad (75)$$

w.r.t. an unbounded sequence $\{t_n\}$ in $[0,\infty)$: if $h(x) = 0$ a.e., then either $x(t_0) = 0$ for some $t_0 \ge 0$, or $0$ is an $\omega$-limit point of $x$.

Then the closed-loop system (73) with (74) admits the origin $x = 0$ as a UAS equilibrium point. Furthermore, $|u(x)| \le \bar{u}$, $\forall x$.

Proof: We can first write, based on Condition 1 in Theorem 7,

$$\dot{W} \le \frac{\partial W}{\partial x}\,g\big(x,\alpha(t)u\big)\alpha(t)u;$$

using Lemma II.4 in (31) and considering the controller (74), we have

$$\dot{W} \le -\tfrac{1}{2}\lambda(x)|G(x,0)|^2, \qquad |u(x)| \le \bar{u}, \ \forall x.$$

Next, we define for (73) with the controller (74) the positive invariant set $M = \{x \mid W(x) = 0\}$. Note that we can also write

$$M = \{x \mid \dot{W}(x) = 0\} \Leftrightarrow \{x \mid G(x,0) = 0\} \Leftrightarrow \{x \mid u(x) = 0\}.$$



Thus, the restriction of (73) to $M$ is the system $\dot{x} = f(x)$. Finally, using Theorem 5 in (18), and under Condition 2 in Theorem 7, we conclude that $x = 0$ is US for (73) with the controller (74). Furthermore, if $|u(x)| \le \bar{u}$ then $|\alpha(t)u(x)| \le \bar{u}$, $\forall t, x$.
Now we note that for the virtual output $y = h(x) = \lambda(x)^{0.5}\big|\frac{\partial W}{\partial x}(x)g(x,0)\big|$ and any $\bar{a} > 0$ we can write

$$W\big(t,x(t)\big) - W\big(t_0,x(t_0)\big) \le -\tfrac{1}{2}\int_{t_0}^{t}|y(\tau)|^2 d\tau = -\int_{t_0}^{t}\varphi\big(y(\tau)\big)d\tau,$$

thus we have

$$\int_{t_0}^{t}\big(\varphi(y(\tau)) - \bar{a}\big)d\tau \le \int_{t_0}^{t}\varphi\big(y(\tau)\big)d\tau \le W\big(t_0,x(t_0)\big) \le M_W, \quad M_W > 0.$$

Finally, based on this last inequality and under Condition 3 in Theorem 7, using Theorem 1 in (24), we conclude that $x = 0$ is a UAS equilibrium point for (73), (74). □

Remark 5: Here again we can simplify Condition 3 of Theorem 7, as follows. Based on Proposition 3 and Lemma 7 in (24), this condition is equivalent to: $\forall x$ solutions of the reduced limiting system

$$\dot{x} = f(x) + g\big(x,\epsilon_1 u(x)\big)\Big(-\lambda(x)\alpha_\gamma(t)\frac{\partial W}{\partial x}(x)g(x,0)\Big)^T,$$
$$y = h(x) = \lambda(x)^{0.5}\Big|\frac{\partial W}{\partial x}(x)g(x,0)\Big|, \qquad (76)$$

where the limiting function $\alpha_\gamma(t)$ is defined as $\alpha_\gamma(t) = \lim_{n\to\infty}\alpha(t+t_n)$ w.r.t. an unbounded sequence $\{t_n\}$ in $[0,\infty)$: if $h(x) = 0$ a.e., then either $x(t_0) = 0$ for some $t_0 \ge 0$, or $0$ is an $\omega$-limit point of $x$. This writes directly as the ZSD condition: $\forall x$ solutions of the system

$$\dot{x} = f(x), \qquad (77)$$

either $x(t_0) = 0$ for some $t_0 \ge 0$, or $0$ is an $\omega$-limit point of $x$. ♦

Theorem 7 deals with the case of the general nonlinear model (73). For the particular case of affine nonlinear models, i.e. $g(x,u) = g(x)$, we can directly write the following proposition.
Proposition 7: Consider the closed-loop system that consists of the faulty system (64) with the static state feedback

$$u(x) = -\lambda(x)G(x)^T, \qquad G(x) = \frac{\partial W}{\partial x}(x)\,g(x), \qquad \lambda(x) = \frac{\bar{u}}{1+|G(x)|^2}, \qquad (78)$$

where $W$ is a $C^2$, positive semidefinite function such that:

1- $L_f W \le 0$,

2- The system $\dot{x} = f(x)$ is AS conditionally to the set $M = \{x \mid W(x) = 0\}$,

3- $\forall x$ limiting solutions of the system

$$\dot{x} = f(x) + g(x)\Big(-\lambda(x)\alpha_\gamma(t)\frac{\partial W}{\partial x}(x)g(x)\Big)^T,$$
$$y = h(x) = \lambda(x)^{0.5}\Big|\frac{\partial W}{\partial x}(x)g(x)\Big|, \qquad (79)$$

w.r.t. an unbounded sequence $\{t_n\}$ in $[0,\infty)$: if $h(x) = 0$ a.e., then either $x(t_0) = 0$ for some $t_0 \ge 0$, or $0$ is an $\omega$-limit point of $x$.

Then the closed-loop system (64) with (78) admits the origin $x = 0$ as a UAS equilibrium point. Furthermore, $|u(x)| \le \bar{u}$, $\forall x$.

Proof: The proof is a direct consequence of Theorem 7. However, in this case the constraint that the same fault occurs on all the actuators is relaxed. Indeed, in this case we can directly write, $\forall\alpha(t) \in \mathbb{R}^{m\times m}$ s.t. $0 < \epsilon_1 \le \alpha_{ii}(t) \le 1$, $\forall t$:

$$\dot{W} \le -\lambda(x)L_g W(x)\,\alpha(t)\,L_g W(x)^T \le -\lambda(x)\epsilon_1 L_g W(x)L_g W(x)^T \le -\epsilon_1\lambda(x)|G(x)|^2.$$

The rest of the proof remains unchanged. □

If we compare the dynamic controllers proposed in Theorems 2, 3, 4, 6 and the static controllers of Theorems 5, 7, we can see that the dynamic controllers ensure that the control at the initialization time is zero, whereas this is not true for the static controllers. In contrast, the static controllers have the advantage of ensuring that the feedback control amplitude stays within the desired bound $\bar{u}$. We can also notice that, except for the controller in Theorem 3, all the remaining controllers proposed here do not involve the vector field $f$ in their computation. This implies that these controllers are robust with respect to any uncertainty $\Delta f$, as long as the conditions on $f$ required in the different theorems are still satisfied by the uncertain vector field $f + \Delta f$. Furthermore, the dynamic controller of Theorem 4 inherits the robustness properties of the nominal controller $u_{nom}$ used to write equation (57) (refer to Proposition 6.5, (40), p. 244).

9. Conclusion and future work 

In this chapter we have presented different passive fault tolerant controllers for linear as well as for nonlinear models. Firstly, we have formulated the FTC problem in the context of absolute stability theory, which has led to direct solutions to the passive FTC problem for LTI systems with uncertainties as well as input saturations. Open problems to which this formulation may be applied include infinite dimensional models, stochastic models as well as time-delay models. Secondly, we have proposed several fault tolerant controllers for nonlinear models, by formulating the FTC problem as a cascade passivity-based control problem. Although the proposed formulation has led to solutions for a large class of loss of actuator effectiveness faults for nonlinear systems, a more general result treating component faults entering the system through the vector field $f$, additive faults on $g$, as well as the complete loss of some actuators is still missing and should be the subject of future work.

10. References 

[1] F. Amato. Robust control of linear systems subject to uncertain time-varying parameters. 

Lecture Notes in Control and Information Sciences. Springer edition, 2006. 
[2] M. Aizerman and F. Gantmacher. Absolute stability of regulator systems. Holden-Day,

INC. 1964. 
[3] B. Anderson and S. Vongpanitlerd. Network analysis and synthesis. Network series. 

Prentice-hall edition, 1973. 
[4] M. Basin and M. Pinsky. Stability impulse control of faulted nonlinear systems. IEEE,

Transactions on Automatic Control, 43(11):1604-1608, 1998. 
[5] M. Benosman. A survey of some recent results on nonlinear fault tolerant 

control. Mathematical Problems in Engineering, 2010. Article ID 586169, 25 pages, 

doi:10.1155/2010/586169. 
[6] M. Benosman and K.-Y. Lum. Application of absolute stability theory to robust control 

against loss of actuator effectiveness. IET Control Theory and Applications, 3(6):772-788, 

June 2009. 




[7] M. Benosman and K.-Y. Lum. On-line references reshaping and control re-allocation 

for nonlinear fault tolerant control. IEEE, Transactions on Control Systems Technology, 

17(2):366-379, March 2009. 
[8] M. Benosman and K.-Y. Lum. Application of passivity and cascade structure to robust 

control against loss of actuator effectiveness. Int. Journal of Robust and Nonlinear Control, 

20:673-693, 2010. 
[9] M. Benosman and K.-Y. Lum. A passive actuators' fault tolerant control for affine 

nonlinear systems. IEEE, Transactions on Control Systems Technology, 18(1):152-163, 2010. 
[10] C. Bonivento, L. Gentili, and A. Paoli. Internal model based fault tolerant control of a 

robot manipulator. In IEEE, Conference on Decision and Control, 2004. 
[11] C. Bonivento, A. Isidori, L. Marconi, and A. Paoli. Implicit fault-tolerant control: 

application to induction motors. Automatica, 40(3):355-371, 2004. 
[12] C. Byrnes, A. Isidori, and J.C.Willems. Passivity, feedback equivalence, and the global 

stabilization of minimum phase nonlinear systems. IEEE, Transactions on Automatic 

Control, 36(11):1228-1240, November 1991. 
[13] M. Demetriou, K. Ito, and R. Smith. Adaptive monitoring and accommodation of nonlinear actuator faults in positive real infinite dimensional systems. IEEE, Transactions on Automatic Control, 52(12):2332-2338, 2007.
[14] A. Fekih. Effective fault tolerant control design for nonlinear systems: application to a 

class of motor control system. IET Control Theory and Applications, 2(9):762-772, 2008. 
[15] T. Gruijic and D. Petkovski. On robustness of Lurie systems with multiple 

non-linearities. Automatica, 23(3):327-334, 1987. 
[16] T. Hu, B. Huang, and Z. Lin. Absolute stability with a generalized sector condition. 

IEEE, Transactions on Automatic Control, 49(4):535-548, 2004. 
[17] T. Hu, Z. Lin, and B. Chen. An analysis and design method for linear systems subject to 

actuator saturation and disturbance. Automatica, 38:351-359, 2002. 
[18] A. Iggidr and G. Sallet. On the stability of nonautonomous systems. Automatica,

39:167-171, 2003. 
[19] B. Jiang and F. Chowdhury. Fault estimation and accommodation for linear MIMO discrete-time systems. IEEE, Transactions on Control Systems Technology, 13(3):493-499,

2005. 
[20] B. Jiang and F Chowdhury. Parameter fault detection and estimation of a class of 

nonlinear systems using observers. Journal of Franklin Institute, 342(7): 725-736, 2005. 
[21] B. Jiang, M. Staroswiecki, and V. Cocquempot. Fault accommodation for nonlinear dynamic systems. IEEE, Transactions on Automatic Control, 51(9):1578-1583, 2006.
[22] H. Khalil. Nonlinear systems. Prentice-Hall, third edition, 2002. 
[23] M. Karpenkoa, N. Sepehri, and D. Scuseb. Diagnosis of process valve actuator faults 

using a multilayer neural network. Control Engineering Practice, 11(11):1289-1299, 

November 2003. 
[24] T.-C. Lee and Z.-P Jiang. A generalization of Krasovskii-LaSalle theorem for nonlinear 

time-varying systems: converse results and applications. IEEE, Transactions on Automatic 

Control, 50(8):1147-1163, August 2005. 
[25] F. Liao, J. Wang, and G. Yang. Reliable robust flight tracking control: An LMI approach. 

IEEE, Transactions on Control Systems Technology, 10(l):76-89, 2002. 
[26] M. Liberzon. Essays on the absolute stability theory. Automation and remote control, 

67(10):1610-1644, 2006. 






[27] X. Mao. Exponential stability of stochastic delay interval systems with Markovian switching. IEEE, Transactions on Automatic Control, 47(10):1604-1612, 2002.

[28] H.-J. Ma and G.-H. Yang. FTC synthesis for nonlinear systems: Sum of squares optimization approach. In IEEE, Conference on Decision and Control, pages 2645-2650, New Orleans, USA, 2007.

[29] M. Mahmoud, J. Jiang, and Y. Zhang. Stabilization of active fault tolerant control systems with imperfect fault detection and diagnosis. Stochastic Analysis and Applications, 21(3):673-701, 2003.

[30] M. Maki, J. Jiang, and K. Hagino. A stability guaranteed active fault-tolerant control system against actuator failures. Int. Journal of Robust and Nonlinear Control, 14:1061-1077, 2004.

[31] F. Mazenc and L. Praly. Adding integrations, saturated controls, and stabilization for feedforward systems. IEEE, Transactions on Automatic Control, 41(11):1559-1578, November 1996.

[32] P. Mhaskar. Robust model predictive control design for fault tolerant control of process systems. Ind. Eng. Chem. Res., 45:8565-8574, 2006.

[33] P. Mhaskar, A. Gani, and P. Christofides. Fault-tolerant control of nonlinear processes: Performance-based reconfiguration and robustness. Int. Journal of Robust and Nonlinear Control, 16:91-111, 2006.

[34] P. Mhaskar, A. Gani, N. El-Farra, C. McFall, P. Christofides, and J. Davis. Integrated fault detection and fault-tolerant control of nonlinear process systems. AIChE J., 52:2129-2148, 2006.

[35] P. Mhaskar, C. McFall, A. Gani, P. Christofides, and J. Davis. Isolation and handling of actuator faults in nonlinear systems. Automatica, 44(1):53-62, January 2008.

[36] H. Niemann and J. Stoustrup. Reliable control using the primary and dual Youla parametrization. In IEEE, Conference on Decision and Control, pages 4353-4358, 2002.

[37] H. Niemann and J. Stoustrup. Passive fault tolerant control of a double inverted pendulum - a case study. Control Engineering Practice, 13:1047-1059, 2005.

[38] R. Patton. Fault-tolerant control systems: The 1997 situation. In IFAC Symposium Safeprocess'97, pages 1033-1055, U.K., 1997.

[39] P.G. de Lima and G.G. Yen. Accommodating controller malfunctions through fault tolerant control architecture. IEEE Transactions on Aerospace and Electronic Systems, 43(2):706-722, 2007.

[40] R. Sepulchre, M. Jankovic, and P. Kokotovic. Constructive nonlinear control. Springer edition, 1997.

[41] A. Shumsky. Algebraic approach to the problem of fault accommodation in nonlinear systems. In IFAC 17th World Congress, pages 1884-1889, 2008.

[42] M. Staroswiecki, H. Yang, and B. Jiang. Progressive accommodation of aircraft actuator faults. In 6th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, pages 877-882, 2006.

[43] G. Tao, S. Chen, and S. Joshi. An adaptive actuator failure compensation controller using output feedback. IEEE, Transactions on Automatic Control, 47(3):506-511, 2002.

[44] A. Teel and L. Praly. Tools for semiglobal stabilization by partial state and output feedback. SIAM Journal on Control and Optimization, 33(5):1443-1488, 1995.

[45] M. Vidyasagar. Nonlinear systems analysis. Prentice-Hall, second edition, 1993.




[46] N. Wu, Y. Zhang, and K. Zhou. Detection, estimation, and accommodation of loss of

control effectiveness. Int. Journal of Adaptive Control and Signal Processing, 14:775-795, 

2000. 
[47] S. Wu, M. Grimble, and W. Wei. QFT based robust/fault tolerant flight control design 

for a remote pilotless vehicle. In IEEE International Conference on Control Applications, 

China, August 1999. 
[48] S. Wu, M. Grimble, and W. Wei. QFT based robust/fault tolerant flight control 

design for a remote pilotless vehicle. IEEE, Transactions on Control Systems Technology, 

8(6):1010-1016, 2000. 
[49] H. Yang, V. Cocquempot, and B. Jiang. Robust fault tolerant tracking control 

with application to hybrid nonlinear systems. IET Control Theory and Applications, 

3(2):211-224, 2009. 
[50] X. Zhang, T. Parisini, and M. Polycarpou. Adaptive fault-tolerant control of nonlinear 

uncertain systems: An information-based diagnostic approach. IEEE, Transactions on 

Automatic Control, 49(8): 1259-1274, 2004. 
[51] Y. Zhang and J. Jiang. Design of integrated fault detection, diagnosis and reconfigurable control systems. In IEEE, Conference on Decision and Control, pages 3587-3592, 1999.

[52] Y. Zhang and J. Jiang. Integrated design of reconfigurable fault-tolerant control systems. Journal of Guidance, Control, and Dynamics, 24(1):133-136, 2000.

[53] Y. Zhang and J. Jiang. Bibliographical review on reconfigurable fault-tolerant control systems. In Proceedings of the 5th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, pages 265-276, Washington DC, 2003.

[54] Y. Zhang and J. Jiang. Issues on integration of fault diagnosis and reconfigurable control in active fault-tolerant control systems. In 6th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, pages 1513-1524, China, August 2006.



14 



Design Principles of Active Robust 
Fault Tolerant Control Systems 

Anna Filasova and Dusan Krokavec 

Technical University of Kosice
Slovakia 



1. Introduction 

The complexity of control systems requires fault tolerance schemes that provide control of the faulty system. Fault tolerant systems are one of the more fruitful applications, with potential significance for those domains in which control must proceed while the controlled system is operative and testing opportunities are limited by operational considerations. The real problem is usually to fix the system with faults so that it can continue its mission for some time, possibly with reduced functionality. These problems are addressed by fault detection, identification and reconfiguration (FDIR) systems. The practical benefits of an integrated approach to FDIR seem to be considerable, especially when knowledge of the available fault isolations and system reconfigurations is used to reduce the cost and to increase the control reliability and utility. Reconfiguration can be viewed as the task of selecting those elements whose reconfiguration is sufficient to restore acceptable behavior of the system. If an FDIR system is designed properly, it will be able to deal with the specified faults and maintain the system stability and an acceptable level of performance in the presence of faults.
The essential aspect of fault-tolerant control design is the conception of diagnosis procedures that can solve the fault detection and isolation problem. Fault detection is understood as the problem of making a binary decision: either something has gone wrong or everything is in order. The procedure consists of residual signal generation (signals that contain information about the failures or defects) followed by their evaluation within decision functions, and it is usually achieved by designing a system which, by processing input/output data, is able to generate the residual signals, detect the presence of an incipient fault and isolate it.

In principle, in order to achieve fault tolerance, some redundancy is necessary. Whereas direct redundancy is realized by multiple hardware channels, fault-tolerant control involves functional redundancy. Functional (analytical) redundancy is usually achieved by the design of subsystems whose functionality is derived from the system model and can be realized using algorithmic (software) redundancy. Thus, analytical redundancy most often means the use of functional relations between system variables, and residuals are derived from implicit information in functional or analytical relationships that exist between measurements taken from the process and a process model. In this sense a residual is a fault indicator, based on a deviation between measurements and model-equation-based computations, and model-based diagnosis uses models to obtain residual signals that are as a rule zero in the fault-free case and nonzero otherwise.




A fault can be detected and isolated in a fault diagnosis system when it causes a residual change, and subsequent analysis of the residuals has to provide information about the localization of the faulty component. From this point of view, the fault decision information should be available in a suitable format to specify the possible control structure class and to facilitate the appropriate adaptation of the control feedback laws. Whereas diagnosis is the problem of identifying elements whose abnormality is sufficient to explain an observed malfunction, reconfiguration can be viewed as the problem of identifying elements whose use in a new structure is sufficient to restore acceptable behavior of the system.

1.1 Fault tolerant control 

The main task to be tackled in achieving fault tolerance is to design a controller with a suitable reconfigurable structure that guarantees stability, satisfactory performance and economical plant operation not only in nominal operational conditions, but also under some component malfunctions. Generally, fault-tolerant control is a strategy for reliable and highly efficient control law design, and includes fault-tolerant system requirements analysis, analytical redundancy design (fault isolation principles) and fault accommodation design (fault control requirements and reconfigurable control strategy). The benefits resulting from this characterization give a unified framework that should facilitate the development of an integrated theory of FDIR and control (fault-tolerant control systems, FTCS) to design systems having the ability to accommodate component failures automatically.

FTCS can be classified into two types: passive and active. In passive FTCS, fixed controllers are used and designed in such a way as to be robust against a class of presumed faults. To ensure this, the closed-loop system remains insensitive to certain faults using constant controller parameters and without the use of on-line fault information. Because a passive FTCS has to maintain the system stability under various component failures, from the performance viewpoint the designed controller has to be very conservative. Given the typical trade-off between optimality and robustness, it is very difficult for a passive FTCS to be optimal from the performance point of view alone.

Active FTCS react to system component failures actively, by reconfiguring control actions so that the stability and an acceptable (possibly partially degraded, graceful) performance of the entire system can be maintained. To achieve a successful control system reconfiguration, this approach relies heavily on a real-time fault detection scheme for the most up-to-date information about the status of the system and the operating conditions of its components. To re-schedule the controller function, a fixed structure is modified to account for uncontrollable changes in the system and unanticipated faults. In this way, an active FTCS has the potential to produce less conservative performance.

The critical issue facing any active FTCS is that there is only a limited amount of reaction time available to perform fault detection and control system reconfiguration. Given the limited amount of time and information, it is highly desirable to design an FTCS that possesses the guaranteed stability property of a passive FTCS, but also the performance optimization attribute of an active FTCS.

Selected useful publications, especially interesting books on this topic (Blanke et al.,2003), 
(Chen and Patton,1999), (Chiang et al.,2001), (Ding,2008), (Ducard,2009), (Simani et al.,2003) 
are presented in References. 




1.2 Motivation 

A number of problems that arise in state control can be reduced to a handful of standard convex and quasi-convex problems that involve matrix inequalities. It is known that the optimal solution can be computed by using interior point methods (Nesterov and Nemirovsky,1994), which converge in polynomial time with respect to the problem size; efficient interior point algorithms have been developed for these standard problems, and further development of such algorithms is an area of active research. For this approach, the stability conditions may be expressed in terms of linear matrix inequalities (LMI), which have a notable practical interest due to the existence of powerful numerical solvers. Some progress reviews in this field can be found e.g. in (Boyd et al.,1994), (Herrmann et al.,2007), (Skelton et al.,1998), and the references therein.

In contrast to the application of standard pole placement methods in active FTCS design, there do not exist many structures for solving this problem using the LMI approach (e.g. see (Chen et al.,1999), (Filasova and Krokavec,2009), (Liao et al.,2002), (Noura et al.,2009)). To generalize the properties of non-expansive systems formulated as H-infinity problems in the bounded real lemma (BRL) form, the main motivation of this chapter is to present a reformulated design method for virtual sensor control design in FTCS structures, as well as state estimator based active control structures for single actuator faults in continuous-time linear MIMO systems. To start work with this formalism, structured residual generators are designed first to demonstrate the suitability of the unified algebraic approach in these design tasks. LMI-based design conditions are outlined generally to give sufficient conditions for a solution. The used structure is motivated by the standard ones (Dong et al.,2009), and in the presented form it enables the design of systems with reconfigurable controller structures.

2. Problem description 

Throughout this chapter the task is concerned with the computation of a reconfigurable feedback $u(t)$ which controls the observable and controllable faulty linear dynamic system given by the set of equations

$$\dot{q}(t) = Aq(t) + B_u u(t) + B_f f(t), \qquad (1)$$
$$y(t) = Cq(t) + D_u u(t) + D_f f(t), \qquad (2)$$

where $q(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^r$, $y(t) \in \mathbb{R}^m$, and $f(t) \in \mathbb{R}^l$ are vectors of the state, input, output and fault variables, respectively, and $A \in \mathbb{R}^{n\times n}$, $B_u \in \mathbb{R}^{n\times r}$, $C \in \mathbb{R}^{m\times n}$, $D_u \in \mathbb{R}^{m\times r}$, $B_f \in \mathbb{R}^{n\times l}$, $D_f \in \mathbb{R}^{m\times l}$ are real matrices. The problem of interest is to design asymptotically stable closed-loop systems with linear memoryless state feedback controllers of the form

$$u(t) = -Ky_e(t), \qquad (3)$$
$$u(t) = -Kq_e(t) - Lf_e(t), \qquad (4)$$

respectively. Here $K \in \mathbb{R}^{r\times m}$ is the output controller gain matrix, $K \in \mathbb{R}^{r\times n}$ is the nominal state controller gain matrix, $L \in \mathbb{R}^{r\times l}$ is the compensating controller gain matrix, $y_e(t)$ is the output of the system estimated by the virtual sensor, $q_e(t) \in \mathbb{R}^n$ is the system state estimate vector, and $f_e(t) \in \mathbb{R}^l$ is the fault estimate vector. The active compensation method can be applied to systems where



$$\mathrm{rank}\begin{bmatrix}B_u\\ D_u\end{bmatrix} = \mathrm{rank}\begin{bmatrix}B_u & B_f\\ D_u & D_f\end{bmatrix}, \qquad (5)$$

and the additive term $B_f f(t)$ is compensated by the term

$$-B_f f_e(t) = -B_u Lf_e(t), \qquad (6)$$

which implies (4). The estimators are then given by the set of the state equations

$$\dot{q}_e(t) = Aq_e(t) + B_u u(t) + B_f f_e(t) + J\big(y(t) - y_e(t)\big), \qquad (7)$$
$$\dot{f}_e(t) = Mf_e(t) + N\big(y(t) - y_e(t)\big), \qquad (8)$$
$$y_e(t) = Cq_e(t) + D_u u(t) + D_f f_e(t), \qquad (9)$$

where $J \in \mathbb{R}^{n\times m}$ is the state estimator gain matrix, and $M \in \mathbb{R}^{l\times l}$, $N \in \mathbb{R}^{l\times m}$ are the system and input matrices of the fault estimator, respectively; or by the set of equations

$$\dot{q}_{fe}(t) = Aq_{fe}(t) + B_u u_f(t) + J\big(y_f(t) - D_u u_f(t) - C_f q_{fe}(t)\big), \qquad (10)$$
$$y_e(t) = Ey_f(t) + (C - EC_f)q_{fe}(t), \qquad (11)$$

where $E \in \mathbb{R}^{m\times m}$ is a switching matrix, generally used in such a way that $E = 0$ or $E = I_m$.



3. Basic preliminaries 

Definition 1 (Null space): Let $E \in \mathbb{R}^{h\times h}$, $\mathrm{rank}(E) = k < h$, be a rank-deficient matrix. Then the null space $\mathcal{N}_E$ of $E$ is the orthogonal complement of the row space of $E$.

Proposition 1 (Orthogonal complement): Let $E \in \mathbb{R}^{h\times h}$, $\mathrm{rank}(E) = k < h$, be a rank-deficient matrix. Then an orthogonal complement $E^\perp$ of $E$ is

$$E^\perp = E^\circ U_2^T, \qquad (12)$$

where $U_2^T$ is the null space of $E$ and $E^\circ$ is an arbitrary matrix of appropriate dimension.
Proof: The singular value decomposition (SVD) of $E \in \mathbb{R}^{h\times h}$, $\mathrm{rank}(E) = k < h$, gives

$$U^T E V = \begin{bmatrix}U_1^T\\ U_2^T\end{bmatrix} E \begin{bmatrix}V_1 & V_2\end{bmatrix} = \begin{bmatrix}\Sigma_1 & 0\\ 0 & 0\end{bmatrix}, \qquad (13)$$

where $U \in \mathbb{R}^{h\times h}$ is the orthogonal matrix of the left singular vectors, $V \in \mathbb{R}^{h\times h}$ is the orthogonal matrix of the right singular vectors of $E$, and $\Sigma_1 \in \mathbb{R}^{k\times k}$ is the diagonal positive definite matrix of the form

$$\Sigma_1 = \mathrm{diag}[\sigma_1 \ \cdots\ \sigma_k], \quad \sigma_1 \ge \cdots \ge \sigma_k > 0, \qquad (14)$$

whose diagonal elements are the singular values of $E$. Using the orthogonality of $U$ and $V$, i.e. $U^T U = I_h$ as well as $V^T V = I_h$, and

$$U^T U = \begin{bmatrix}U_1^T\\ U_2^T\end{bmatrix}\begin{bmatrix}U_1 & U_2\end{bmatrix} = \begin{bmatrix}I_k & 0\\ 0 & I_{h-k}\end{bmatrix}, \qquad (15)$$

respectively, where $I_h \in \mathbb{R}^{h\times h}$ is the identity matrix, then $E$ can be written as

$$E = U\Sigma V^T = \begin{bmatrix}U_1 & U_2\end{bmatrix}\begin{bmatrix}\Sigma_1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}V_1^T\\ V_2^T\end{bmatrix} = \begin{bmatrix}U_1 & U_2\end{bmatrix}\begin{bmatrix}\Sigma_1 V_1^T\\ 0\end{bmatrix} = U_1 S_1, \qquad (16)$$

where $S_1 = \Sigma_1 V_1^T$. Thus, (15) and (16) imply

$$U_2^T E = U_2^T\begin{bmatrix}U_1 & U_2\end{bmatrix}\begin{bmatrix}S_1\\ 0\end{bmatrix} = 0. \qquad (17)$$

It is evident that for an arbitrary matrix $E^\circ$

$$E^\circ U_2^T E = E^\perp E = 0, \qquad (18)$$
$$E^\perp = E^\circ U_2^T, \qquad (19)$$

respectively, which implies (12). This concludes the proof. ■
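Numerically, the orthogonal complement of Proposition 1 can be obtained directly from an SVD routine. The following sketch (illustrative, not from the chapter) builds $E^\perp = E^\circ U_2^T$ for a rank-deficient $E$ and checks that $E^\perp E = 0$.

```python
# Small numerical check of Proposition 1: orthogonal complement via the SVD.
import numpy as np

E = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])              # rank(E) = 2 < h = 3
U, s, Vt = np.linalg.svd(E)
k = int(np.sum(s > 1e-10))                   # numerical rank
U2 = U[:, k:]                                # left singular vectors of the null space
E0 = np.ones((2, U2.shape[1]))               # arbitrary matrix of appropriate dimension
E_perp = E0 @ U2.T
print(np.allclose(E_perp @ E, 0))            # True: E_perp annihilates E
```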

Proposition 2 (Schur complement): Let $Q > 0$, $R > 0$, $S$ be real matrices of appropriate dimensions; then the next inequalities are equivalent:

$$\begin{bmatrix}Q & S\\ S^T & -R\end{bmatrix} < 0 \quad\Leftrightarrow\quad Q + SR^{-1}S^T < 0, \ R > 0. \qquad (20)$$

Proof: Let the linear matrix inequality take the form

$$\begin{bmatrix}Q & S\\ S^T & -R\end{bmatrix} < 0; \qquad (21)$$

then using the Gauss elimination principle it yields

$$\begin{bmatrix}I & SR^{-1}\\ 0 & I\end{bmatrix}\begin{bmatrix}Q & S\\ S^T & -R\end{bmatrix}\begin{bmatrix}I & 0\\ R^{-1}S^T & I\end{bmatrix} = \begin{bmatrix}Q + SR^{-1}S^T & 0\\ 0 & -R\end{bmatrix}. \qquad (22)$$

Since

$$\det\begin{bmatrix}I & SR^{-1}\\ 0 & I\end{bmatrix} = 1, \qquad (23)$$

it is evident that this congruence transformation does not change the negativity of (21), and so (22) implies (20). This concludes the proof. ■

Note that in what follows the matrix notations E, Q, R, S, U, and V will also be used in another context.
Proposition 3 (Bounded real lemma): For given $\gamma \in \mathbb{R}$ and the linear system (1), (2) with $f(t) = 0$, if there exists a symmetric positive definite matrix $P > 0$ such that

$$\begin{bmatrix}A^T P + PA & PB_u & C^T\\ * & -\gamma^2 I_r & D_u^T\\ * & * & -I_m\end{bmatrix} < 0, \qquad (24)$$

where $I_r \in \mathbb{R}^{r\times r}$, $I_m \in \mathbb{R}^{m\times m}$ are identity matrices, then the given system is asymptotically stable.

Hereafter, $*$ denotes the symmetric item in a symmetric matrix.

Proof: Defining the Lyapunov function as

$$v(q(t)) = q^T(t)Pq(t) + \int_0^t\big(y^T(\tau)y(\tau) - \gamma^2 u^T(\tau)u(\tau)\big)d\tau > 0, \qquad (25)$$

where $P = P^T > 0$, $P \in \mathbb{R}^{n\times n}$, $\gamma \in \mathbb{R}$, and evaluating the derivative of $v(q(t))$ with respect to $t$, it yields

$$\dot{v}(q(t)) = \dot{q}^T(t)Pq(t) + q^T(t)P\dot{q}(t) + y^T(t)y(t) - \gamma^2 u^T(t)u(t) < 0. \qquad (26)$$

Thus, substituting (1), (2) with $f(t) = 0$, it can be written

$$\dot{v}(q(t)) = \big(Aq(t)+B_u u(t)\big)^T Pq(t) + q^T(t)P\big(Aq(t)+B_u u(t)\big) + \big(Cq(t)+D_u u(t)\big)^T\big(Cq(t)+D_u u(t)\big) - \gamma^2 u^T(t)u(t) < 0, \qquad (27)$$

and with the notation

$$q_c^T(t) = \begin{bmatrix}q^T(t) & u^T(t)\end{bmatrix} \qquad (28)$$

it is obtained

$$\dot{v}(q(t)) = q_c^T(t)P_c q_c(t) < 0, \qquad (29)$$

where

$$P_c = \begin{bmatrix}A^T P + PA & PB_u\\ * & -\gamma^2 I_r\end{bmatrix} + \begin{bmatrix}C^T C & C^T D_u\\ * & D_u^T D_u\end{bmatrix} < 0. \qquad (30)$$

Since

$$\begin{bmatrix}C^T C & C^T D_u\\ * & D_u^T D_u\end{bmatrix} = \begin{bmatrix}C^T\\ D_u^T\end{bmatrix}\begin{bmatrix}C & D_u\end{bmatrix} \ge 0, \qquad (31)$$

the Schur complement property implies

$$P_c < 0 \quad\Leftrightarrow\quad \begin{bmatrix}A^T P + PA & PB_u & C^T\\ * & -\gamma^2 I_r & D_u^T\\ * & * & -I_m\end{bmatrix} < 0; \qquad (32)$$

then using (32) the LMI (30) can be written compactly as (24). This concludes the proof. ■
Remark 1 (Lyapunov inequality): Consider the Lyapunov function of the form

$$v(q(t)) = q^T(t)Pq(t) > 0, \qquad (33)$$

where $P = P^T > 0$, $P \in \mathbb{R}^{n\times n}$, and the control law

$$u(t) = -K\big(y(t) - D_u u(t)\big) = -KCq(t), \qquad (34)$$

where $K \in \mathbb{R}^{r\times m}$ is a gain matrix. Because in this case (27) gives

$$\dot{v}(q(t)) = \big(Aq(t)+B_u u(t)\big)^T Pq(t) + q^T(t)P\big(Aq(t)+B_u u(t)\big) < 0, \qquad (35)$$

then inserting (34) into (35) it can be obtained

$$\dot{v}(q(t)) = q^T(t)P_{cb}\,q(t) < 0, \qquad (36)$$

where

$$P_{cb} = A^T P + PA - PB_u KC - (PB_u KC)^T < 0. \qquad (37)$$

Especially, if all system state variables are measurable, the control policy can be defined as

$$u(t) = -Kq(t), \qquad (38)$$

and (37) can be written as

$$A^T P + PA - PB_u K - (PB_u K)^T < 0. \qquad (39)$$

Note that in a real physical dynamic plant model usually $D_u = 0$.
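The bounded real lemma condition (24) is itself an LMI in $P$ and is therefore easy to check numerically. The sketch below (with an arbitrarily chosen stable example system and $\gamma$, not taken from the chapter) verifies feasibility with CVXPY; feasibility certifies that the H-infinity norm of the transfer function from $u$ to $y$ is below the chosen $\gamma$.

```python
# Brief numerical feasibility check of the BRL LMI (24) for a toy system.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
Bu = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
Du = np.array([[0.0]])
gamma = 2.0
n, r, m = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ Bu,                C.T],
    [Bu.T @ P,        -gamma**2 * np.eye(r), Du.T],
    [C,               Du,                    -np.eye(m)],
])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), lmi << -1e-9 * np.eye(n + r + m)])
prob.solve(solver=cp.SCS)
print("BRL feasible for gamma = 2.0:", prob.status == cp.OPTIMAL)
```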






Proposition 4 Let for given real matrices F, G and O = O t > of appropriate dimension a matrix 
A has to satisfy the inequality 

FAG T + GA T F T - < (40) 

then any solution of A can be generated using a solution of inequality 

* T FH + GA T1 
-H 



FHF i 



<0 



(41) 



where H = H T > is a free design parameter. 

Proof. If (40) yields then there exists a matrix H _1 = H~ T > such that 

FAG T + GA T F T - + GA T H X AG T < 

Completing the square in (42) it can be obtained 



(FH + GA T )H" 1 (FH + GA T ) T ■ 
and using Schur complement (43) implies (41). 



FHF 



<0 



(42) 



(43) 



4. Fault isolation 

4.1 Structured residual generators of sensor faults 
4.1.1 Set of the state estimators 

To design structured residual generators of sensor faults based on the state estimators, all 
actuators are assumed to be fault-free and each estimator is driven by all system inputs and 
all but one system outputs. In that sense it is possible according with given nominal fault-free 
system model (1), (2) to define the set of structured estimators for k = 1, 2, ... ,m as follows 

qjteto = Afe,qfe,(0 + B M]ke u(0 + JsJkT s jk(y(0 - D M u(0) (44) 

y ke (t) = Cq ke (t) + D u u(t) (45) 

where A ke e R nxn , B uke e R nxr , J sk e R nx ( m ~ 1 ) / and T sk e Rijn-l)xm takes the next form 

"1 ••• 00000 ••• 00" 



l sk 



'■m0k ' 



00 
00 



1000 
000 10 



00 ••• 00000 



00 
00 



1 



(46) 



Note that T s £ can be obtained by deleting the k-th row in identity matrix I m . 
Since the state estimate error is defined as e^(f) = q(t) — c^ e {t) then 

ejt(t) = Aq(f) + B„u(f) - A fe q fe (f) - B ufe u(f) - ] sk T sk {y{t) - D„u(f)) 

= (A - A ke - J sfc T sfc C)q(f) + (B„ - B ufe )u(f) + A^e^t) 

To obtain the state estimate error autonomous it can be set 

Afe = A - J s jtT sfc C, B uke = B„ 



(47) 



(48) 



31 6 Robust Control, Theory and Applications 

It is obvious that (48) implies 

h(t) = A ke e k (t) = (A - ] sk T sk C)e k (t) (49) 

(44) can be rewritten as 

qjte(0 = ( A " h^skCHke(t) + B„u(t) + J sfc T sfc (y(t) - D„u(r)) = 
= Aqfc(t) + B„u(r) + J sk T sk (y(t) - (Cq fe (f) + D u u(f))) 
and (44), (45) can be rewritten equivalently as 

qfc(') = Aq fe (f) + B„u(f) + J sfc T sfc (y(f) - y ke (t)) (51) 

YfeC) = Cqfa(t) + D„u(f) (52) 

Theorem 1 The k-th state-space estimator (52), (53) is stable if there exist a positive definite symmetric 
matrix P sk > 0, P sk € R nxn and a matrix Z sk e R nx ( m ~ l ) such that 



? sk = P s \ > (53) 

^ T Vsk + PsfcA 

Then ] sk can be computed as 



A T P sit + P sfc A - Z sit T sfc C - C T T&Z£ < (54) 



J sfc = P^Zrt (55) 

Proof. Since the estimate error is autonomous Lyapunov function of the form 

v(e k (t)) = el(t)F sk e k (t) > (56) 

where F sk = Pj fc > 0, F sk e R nxn can be considered. Thus, 

v(e k (t)) = el(t) (A - hkTskC) Tl P sk e k (t) + el(t)F 8k (A - h^ s kC)e k (t) < (57) 

v(e k (t)) = el(t)F skc e k (t)<0 (58) 

respectively, where 

Psfcc = A T P sfc + P sfc A - F sk ] sk T sk C - (F sk ] sk T sk C) T < (59) 

Using notation F sk ] sk = Z sk (59) implies (54). This concludes the proof. ■ 

4.1.2 Set of the residual generators 

Exploiting the model-based properties of state estimators the set of residual generators can be 
considered as 

r s *(0 = X sjk q te (0 + Y sJfc (y(0 - D M u(*)), * = 1,2, . . ., m (60) 

Subsequently 

r s *(0 = X sjk (q(0 - ejk(0) + V sfc Cq(f) = (X sfc + Y sJk C)q(0 - X sk e k (t) (61) 



Design Principles of Active Robust Fault Tolerant Control Systems 



317 



-y t (t) 
- y.(t) 













y,(t) 

y 2 (t) - 


j 









I : 










v_ 











Fig. 1. Measurable outputs for single sensor faults 

To eliminate influences of the state variable vector it is necessary in (61) to consider 

X sfc + Y sfc C = (62) 

Choosing X sk = — T sfc C (62) implies 

*sfc = -T sfc C / Y sk = T sk (63) 

Thus, the set of residuals (60) takes the form 

's*(0 = T sjk (y(0 - D M u(0 - Cq ke (t)), * = 1,2, . . . , m (64) 

When all actuators are fault-free and a fault occurs in the Z-th sensor the residuals will satisfy 
the isolation logic 

\\*sk(t)\\<h sk ,k = l, \\i s k(t)\\>Kk,k^l (65) 

This residual set can only isolate a single sensor fault at the same time. The principle can be 
generalized based on a regrouping of faults in such way that each residual will be designed 
to be sensitive to one group of sensor faults and insensitive to others. 

Illustrative example 

To demonstrate algorithm properties it was assumed that the system is given by (1), (2) where 
the nominal system parameters are given as 






1 







1 3 








1 


, Vu = 


2 1 


5 


-9 


-5 




1 5 



1 2 1 
1 1 



D tt 



00 

00 



and it is obvious that 

T 8 i = l20l= [Ol],T s2 = I 20 2= [10] 



T sl C= [110], T s2 C= [12 1" 



Solving (53), (54) with respect to the LMI matrix variables V sk , and Z sk using 
Self-Dual-Minimization (SeDuMi) package for Matlab, the estimator gain matrix design 
problem was feasible with the results 



L'sl 



0.8258 -0.0656 0.0032 

-0.0656 0.8541 0.0563 

0.0032 0.0563 0.2199 



0.6343 

0.2242 

-0.8595 



0.8312 

0.5950 

-4.0738 



318 



Robust Control, Theory and Applications 





Fig. 2. Residuals for the 1st sensor fault 



/ 




— r 2 (t)- 




/ : 












1 : | : 


" 













r,(t) 

— r 2 (t) 


u 




-5 
-10 
-15 
-20 
-25 
-30 
-35 









15 20 25 



ig. 3. Residuals for the 2nd sensor fault 












0.8258 -0.0656 0.0032" 




0.0335 " 




0.1412 


Ps2 = 


-0.0656 0.8541 0.0563 


/ z s2 = 


0.6344 


, hi = 


1.0479 




0.0032 0.0563 0.2199 




-0.9214 




-4.4614 



respectively. It is easily verified that the system matrices of state estimators are stable with the 
eigenvalue spectra 

p(A-J s iT sl C) = {-1.0000 -2.3459 -3.0804} 

p(A-J s2 T s2 C) = {-1.5130 -1.0000 -0.2626} 
respectively, and the set of residuals takes the form 



r s i(0=[0 1] y(0 



*s2(0=[io] y(0 



1 2 1 

1 1 

1 2 1 

1 1 



qjte(') 



qjte(0 



Fig. 1-3 plot the residuals variable trajectories over the duration of the system run. The results 
show that one residual profile remain about the same through the entire run while the second 
shows step changes, which can be used in the fault isolation stage. 



Design Principles of Active Robust Fault Tolerant Control Systems 31 9 

4.2 Structured residual generators of actuator faults 
4.2.1 Set of the state estimators 

To design structured residual generators of actuator faults based on the state estimators, all 
sensors are assumed to be fault-free and each estimator is driven by all system outputs and 
all but one system inputs. To obtain this a congruence transform matrix T ak £ R nxn , k = 
1,2,... , r be introduced, and so it is natural to write 

To*q(0 = T ak Aq(t) + T ak B u n(t) (66) 

q fc (0 = A Jk q(0 + B Mjk u(0 (67) 



respectively, where 
as well as 



T flfc A / B wfc = T flfc B w (68) 



y k (t) = CT ak q(t) = Cq k (t) (69) 

The set of state estimators associated with (67), (69) for k = 1, 2, . . . , r can be defined in the 
next form 

q ke (t) = A k q ke (t) + B uke u(t) + l*y(0 - ] k y ke (t) (70) 

7ke(t) = Cqfa(0 (71) 

A ke £ R" x " / B Hfce € R" xr / J t/ L fc € K" xm . Denoting the estimate error as e k (t) = q k (t)-q ke (t) 
the next differential equations can be written 

h(t) = qjt(0 - qfe( f ) = 

= Ajtq(t) + B uJfc u(f) - AjtqjteCf) - B Hfce u(f) - Ljty(f) + J fc y fe (f) = 

= A iq (t) + B Hfc u(f) - A k (q k (t) - e k (t)) - B uke u(t)- (72) 

-L l Cq(t)+JfcC(qjfc(t)-e fc (f)) = 
= (A k - A k T ak + ] k CT ak - L k C)q(t) + (B uk - B uke )u(t) + (A k - J k C)e k (t) 

h(t) = (T afc A - A ke T ak - LfcC)q(t) + (B uk - B uke )u(t) + A ke e k (t) (73) 

respectively, where 

A fe = A i -J i C = T flt A-J fc C, fc = l,2,...,r (74) 

are elements of the set of estimators system matrices. It is evident, to make estimate error 
autonomous that it have to be satisfied 

LfcC = T flfc A - A ke T ak , B uke = B uk = T afc B w (75) 

Using (75) the equation (73) can be rewritten as 

e fc (0 = A ke e k (t) = (Ajt - J k C)e k (t) = (T ak A - ] k C)e k (t) (76) 

and the state equation of estimators are then 

qjte(0 = ( T akA - J k C)q ke (t) + B uk u(t) + L k y(t) - ] k y ke (t) (77) 

Yke(t) = Cqfc(t) (78) 



320 Robust Control, Theory and Applications 

4.2.2 Congruence transform matrices 

Generally, the fault-free system equations (1), (2) can be rewritten as 

q(*) = Aq(*) + b MJk u jk (*)+ E h uh*h(t) (79) 

h=l,h^k 

r 
y(f) = Cq(f) + D H u(f) = CAq(f) + Cb H)t u / t(f)+D H u(f) + £ Cb uh u h (t) (80) 

h=l,hjtk 

Cb uk u k (t)=y{t)-CAq{t)-D u u(t)- £ Ch «hMt) (81) 

respectively. Thus, using matrix pseudoinverse it yields 

u k (t) = (Cb uk ) el (y(t)-CAq(t)-D u u(t)- £ Cb uft u ft (f)) (82) 

h=l,h^k 

and substituting (81) 

b Bl u fc (t) = b Hfc (Cb ui ) el Cb Hfc u t (t) (83) 

(I n - b uk (Cb uk ) el C)b uk u k (t) = (84) 

respectively. It is evident that if 

Tak = I» - b Mfc (Cb wfc ) el C , fc = 1,2, . . . ,r (85) 

influence of u^(f) in (77) be suppressed (the k-th column in B uk = T afc B w is the null column, 
approximatively ) . 

4.2.3 Estimator stability 

Theorem 2 The k-th state-space estimator (77), (78) is stable if there exist a positive definite symmetric 
matrix V ak > 0, V ak e R nxn and a matrix Z ak e R nxm such that 

F ak = F T ak > (86) 

A T T^P afc + I 
Then ] k can be computed as 



A T T flfc P afc + P, fc T flfc A - Z flfc C - C T Z fl T fc < (87) 



J k = V~^Z ak (88) 

Proof. Since the estimate error is autonomous Lyapunov function of the form 

v(e k (t)) = el(t)P ak e k (t) > (89) 
where F ak = P T ak > 0, F ak eR™ can be considered. Thus, 

J7(ejt(0) = e[(t)(T fljt A - } k C) T V ak e k {t) + e[(f)P flfc (T fljt A - ] k C)e k (t) < (90) 

v(e k (t)) = eJ(t)P akc e k (t) < (91) 
respectively, where 

F akc = A T T T ak P ak + P a)t T fl/t A - P flt J t C - (P ak hC) T < (92) 

Using notation P ak J k = Z ak (92) implies (87). This concludes the proof. ■ 



Design Principles of Active Robust Fault Tolerant Control Systems 321 

4.2.4 Estimator gain matrices 

Knowing ] k , k = 1, 2, ... , r elements of this set can be inserted into (75). Thus 



L fc C = A* - A ke T ak = A* - (A* - J fc C) (I - b ujt (Cb„ fc ) el C) = 
= (h+(M-hC)Kk(Cb uk ) el )c=(h + *keKk(Cb uk ) el )c 
and 



(93) 



L k = h + A ke b uk (Cb uk )^, fc = l,2,...,r (94) 

4.2.5 Set of the residual generators 

Exploiting the model-based properties of state estimators the set of residual generators can be 
considered as 

*«*(*) = X flJk qjke(0 + YoJk(y(0 - D M u(0), fc = 1,2, . . ., m (95) 

Subsequently 

*ak(t) = X ak (T ak q(t) - ejk(O) + Y«jfcCq(0 = (X afc T afc + Y flJk C)q(0 - X, fc e fc (f) (96) 

To eliminate influences of the state variable vector it is necessary to consider 

X ak T ak + Y ak C = (97) 

X«*(I» " b uit (Cb Mfc ) el C) + Y flit C = (98) 

respectively. Choosing X ak = — C (98) gives 

- (C - Cb Mfc (Cb uit ) el C) + \ ak C = - (I m - Cb uit (Cb Mfc ) el )C + Y flit C = (99) 

i.e. 

\ ak = I m - Cb Hfc (Cb ujt ) el (100) 

Thus, the set of residuals (95) takes the form 

r«*(0 = (Im - Cb Hfc (Cb Hfc ) el )y(f) - Cq fe (f) (101) 

When all sensors are fault-free and a fault occurs in the Z-th actuator the residuals will satisfy 
the isolation logic 

IMOII < h sk , k = I, ||r sjk (*)|| > ^ * ^ / (102) 

This residual set can only isolate a single actuator fault at the same time. The principle can be 
generalized based on a regrouping of faults in such way that each residual will be designed 
to be sensitive to one group of actuator faults and insensitive to others. 



322 



Robust Control, Theory and Applications 









y 2 (t) - 


- / \ 1-" 


:f::::::::::|\::::| 




1 


j ; ; 


X : 


V 









y,(t) 

y 2 (t) " 


-f \ r 


I : V_ 


- 


\ 




\ \ ; 


- 


1 




V 



Fig. 4. System outputs for single actuator faults 





Fig. 5. Residuals for the 1st actuator fault 

Illustrative example 

Using the same system parameters as that given in the example in Subsection 4. 1.2 the next 
design parameters be computed 



b«i 



, (Cb wl ) el = [0.1333 0.0667] , T 






0.8000 -0.3333 -0.1333 
-0.4000 0.3333 -0.2667 
-0.2000 -0.3333 0.8667 



Hi 



(Cb w2 )^ 



[0.0862 0.0345" 



ifl2 



0.6667 2.0000 0.3333 

1.3333 2.0000 1.6667 

-4.3333 -8.0000 -4.6667 



, A 2 



ifll 



0.2 -0.4 
-0.4 0.8 



i fl 2 



0.6379 -0.6207 -0.2586 
-0.1207 0.7931 -0.0862 
-0.6034 -1.0345 0.5690 

1.2931 2.9655 0.6724^ 
0.4310 0.6552 1.2241 
-2.8448 -5.7241 -3.8793 

0.1379 -0.3448 
-0.3448 0.8621 



Solving (86), (87) with respect to the LMI matrix variables P^, and Z^ using 



Design Principles of Active Robust Fault Tolerant Control Systems 



323 





f ~ \^ 


r/t) 


/ 


— r 2 (t)- 


— ^ ; 


" 






7Z.I1.ZI.1.3. 









r/t) 






[ 2 W 


f 


" 


' 



Fig. 6. Residuals for the 2nd actuator fault 

Self-Dual-Minimization (SeDuMi) package for Matlab, the estimator gain matrix design 
problem was feasible with the results 



L al 



-a\ 



0.0257 0.7321' 

0.4346 0.2392 

-0.7413 -0.7469 



r«2 : 



^2 



0.2127 

0.3382 

-0.6686 



0.9808' 

0.0349 

-0.4957 



,J2 



0.7555 -0.0993 0.0619 
0.0993 0.7464 0.1223 
0.0619 0.1223 0.3920 _ 








0.3504 1.2802" 
0.9987 0.8810 

_-2.2579 -2.3825_ 


, Li = 


0.2247 

0.7807 

-2.8319 - 


1.2173 

0.7720 

-2.6695 


0.6768 -0.0702 0.0853" 
0.0702 0.7617 0.0685 
0.0853 0.0685 0.4637 _ 






0.5888 1.6625" 

0.6462 0.3270 

-1.6457 -1.4233 


, L 


2 = 


0.3878 

0.6720 

-2.6375 - 


1.5821 

0.3373 

-1.8200 



respectively. It is easily verified that the system matrices of state estimators are stable with the 
eigenvalue spectra 

p(T fl iA-JiC) = {-1.0000 - 1.6256 ± 0.3775 i} 

p(T fl2 A - J 2 C) = {-1.0000 - 1.5780 ± 0.4521 i} 
respectively, and the set of residuals takes the form 



r«i(0 



Tali*) 



0.2 
-0.4 

0.1379 
-0.3448 



-0.4 

0.8 



y(0 



1 2 1 
1 1 



qie(0 



-0.3448 
0.8621 



y(0- 



1 2 1 
1 1 



q*(0 



Fig. 4-6 plot the residuals variable trajectories over the duration of the system run. The results 
show that both residual profile show changes through the entire run, therefore a fault isolation 
has to be more sophisticated. 



324 



Robust Control, Theory and Applications 



5. Control with virtual sensors 

5.1 Stability of the system 

Considering a sensor fault then (1), (2) can be written as 

q / (0 = Aq / (f) + B M u / (0 (103) 

y f (t) = C f q f (t) + D u xi f (t) (104) 

where q/(t) G R n , xif(t) G K r are vectors of the state, and input variables of the faulty 
system, respectively, Qf G jrmxh ^ g ^ e ou tp U t ma trix of the system with a sensor fault, and 
Yf(t) G K m is a faulty measurement vector. This interpretation means that one row of Qf is 
null row. 
Problem of the interest is to design a stable closed-loop system with the output controller 



where 



u/(f) = -K y e (t) 



y e (f) = Ey/(f) + (C - EC/)q/ e (f) 



(105) 



(106) 



K G R rxm is the controller gain matrix, and E G R mxm is a switching matrix, generally used 
in such a way that E = 0, or E = l m . If E = full state vector estimation is used for control, 
if E = l m the outputs of the fault-free sensors are combined with the estimated state variables 
to substitute a missing output of the faulty sensor. 
Generally, the controller input is generated by the virtual sensor realized in the structure 



q fe (t) = Aq fe (t) + B u u f (t) + ](y f (t) - D u u f (t) - C f q fe (t)) 



(107) 



The main idea is, instead of adapting the controller to the faulty system virtually adapt the 
faulty system to the nominal controller. 

Theorem 3 Control of the faulty system with virtual sensor defined by (103) - (107) is stable in the 
sense of bounded real lemma if there exist positive definite symmetric matrices Q, R G R nxn , and 
matrices K G K rxm , J G R nxm such that 



4>! QB M K (C- 


-EC/) 


-QB M K E (C/ - D M K (C - EC/) 


* 4>2 




(D M K (C-EC/)) T 
-7 2 I r -(D u K E) f 


* * 




* * 




* — i-m 



<0 



where 



<»! 



Q(A - B„K (C - EC/)) + (A - B„K (C - EC/)) T Q 
* 2 = R(A-JC/) + (A-JC/) T R 



Proof. Assembling (103), (104), and (107) gives 



q/(0 
q/ e (0 



A 

JC/A-JC/ 



q/(0 
q/ e (0 



B u 
B„ 



U/(f) 



y/(0 = C/q/M + D H u/(t) 



(108) 

(109) 
(110) 

(111) 
(112) 



Design Principles of Active Robust Fault Tolerant Control Systems 



325 



Thus, defining the estimation error vector 

vW = q/(0-q/e(0 

as well as the congruence transform matrix 



T = T 



I 
I -I 



and then multiplying left-hand side of (111) by (114) results in 

q/(0 
q/ e (0 



A 
JC/A-JC/ 


r 1 T 


q 


f(0 
>(0 


+ T 


B M 




A 
A - JC/ 


[ q/(0 " 


+ 


B„" 




u 


/(f) 



u/(f) 



q/(0 

respectively Subsequently, inserting (105), (106) into (116), (112) gives 



q/W 



together with 



A-B M K (C-EC/) B„K (C-EC/) 
A-JC, 



q/(0 



+ 



-B M K E 




y«(0 



y/(f) = [CpD^^C-EC;) D M K (C-EC/)] 



q/W 



D„K Ey e (r) 



and it is evident, that the separation principle yields. 
Denoting 

qJW =[«!/(*) e^(0], w £ (f)=y e (f) 

~A-B M K (C-EC / )B„K (C-EC / )1 [-B M K E 

A - JC/ \ ' £ [ 

C e =[C f - D„K (C - EC/) D„K (C - EC/)] , D e = -D M K E (121) 

To accept the separation principle a block diagonal symmetric matrix F £ > is chosen, i.e. 



(113) 
(114) 

(115) 
(116) 

(117) 
(118) 

(119) 
(120) 



P £ = diag[QR] 
where Q = Q T > 0, R = R T > 0, Q, R e R nxn Thus, with (109), (110) it yields 



1 e-A-g + A £ 1 £ 



Oi QB W K (C-EC / ) 

* 4>2 



P £ Bg 



QB W K E 





(122) 



(123) 



and inserting (121), (123), into (24) gives (108). This concludes the proof. ■ 

It is evident that there are the cross parameter interactions in the structure of (108). Since the 
separation principle pre-determines the estimator structure (error vectors are independent on 
the state as well as on the input variables), the controller, as well as estimator have to be 
designed independent. 



326 



Robust Control, Theory and Applications 



5.2 Output feedback controller design 

Theorem 4 (Unified algebraic approach) A system (103), (104) with control law (105) is stable if 
there exist positive definite symmetric matrices P > 0, II = P 1 >0 such that 



B^(An + nA r )B^B^nc£ 



■/« 



< 0, i = 0,1,2, ..., m 



(124) 



r»T± 



PA + A J P 



7 2 I r 



r»T±T r'T± 






I« 



<0f = 1,2, ..., m, E = I„ 



C T1 (PA + A T P)C T1T C T1 Cj,. 

* — l m 



<0 f = 0,1,2, ... m, E = 



where 



n mT± 



(C-EC /z -) 7 
E 



(125) 



(126) 



(127) 



and B^ is the orthogonal complement to B u . Then the control law gain matrix K exists if for obtained 
P there exist a symmetric matrices H > such that 



FHF T -e z FH + G;Kj 

* -H 



<0 



(128) 



where i = 0,1,2, ... ,m, and 



©i 



PA + A T P Cj- 

* -7 2 I r 



<0, F: 



PB M " 




r(c-Ec /f ) T l 





, G = 


E 











(129) 



Proof. Considering e<j(f) = then inserting Q = P (108) implies 



*! -PB„K E (C / -D„K (C-EC / )) J 

,2 T ,~ .^f ' 



-7 Z Ir 

* 



(D„K E) 

~*-m 



<0 



(130) 



where 

4>! = P(A - B M K (C - EC/)) + (A - B W K (C - EC f )) T F (131) 

For the simplicity it is considered in the next that D w = (in real physical systems this 
condition is satisfied) and subsequently (130), (131) can now be rewritten as 



PA + A 1 P 



"f 



PB W 






K [C-ECfEO] 



Yh 
* —I, 

(C-EC f ) T 
E 




(132) 



kJ[b£poo] <o 



Design Principles of Active Robust Fault Tolerant Control Systems 



327 



Defining the congruence transform matrix 

T^diagfp- 1 l r I w ] 
then pre-multiplying left-hand side and right-hand side of (132) by (133) gives 



(133) 



AP- 1 + P _1 A i P^C 



lr-Tn 



-7 2 I 








Since it yields 



K [(C-EC^P" 1 E 



B° 



B M 







* -I w 

P-^C-ECy) 7 

E 





B^O 
I r 
l m 



(134) 



K o T [B^0 0] <0 



(135) 



pre-multiplying left hand side of (134) by (135) as well as right-hand side of (134) by 
transposition of (135) leads to inequalities 



B^(AP" 1 + P- 1 A T )B^ 



JLp-lr-T 



B 7 |P ^C 



"7 2 Ir 



<0 



(136) 



B^AP" 1 + P" 1 A T )B l [ T BjJ-P^Cj 



<0 



(137) 



respectively Considering all possible structures Ca> i = 1, 2, . . . , m associated with simple 
sensor faults, as well as fault-free regime associated with the nominal matrix C = Cm, then 
using the substitution P _1 = II the inequality (136) implies (124). 
Analogously, using orthogonal complement 



-oT± 



(C-EC f ) T 
E 




(C-EC 7 ) 7 
E 




In 



C* T± 

* lm 



(138) 



and pre-multiplying left-hand side of (132) by (138) and its right-hand side by transposition 
of (138) results in 



r »TJL 



PA + A T P 

* -7 2 I r 



r *T±_T r *T± 



C J 



<o 



(139) 



Considering all possible structures Ct\, i = 1,2, . . . , m (139) implies (125). 
Inequality (125) takes a simpler form if E = 0. Thus, now 



•^oTJL 



r c r- 


JL 


"C T± 





= 


I r 







1^ 



(140) 



328 



Robust Control, Theory and Applications 



and pre-multiplying left-hand sides of (132) by (140) and its right-hand side by transposition 
of (140) results in 

r C T± (PA + A T P)C T±T C T± Cj" 
* -7 2 I r 



(141) 



which implies (126). This concludes the proof. ■ 

Solving LMI problem (124), (125), (126) with respect to LMI variable P, then it is possible to 
construct (128), and subsequently to solve (127) defining the feedback control gain K 0/ and H 
as LMI variables. 

Note, (124), (125), (126) have to be solved iteratively to obtain any approximation P _1 = II. 
This implies that these inequalities together define only the sufficient condition of a solution, 
and so one from (P, II -1 ) can be used in design independently while verifying solution 
using the latter. Since of an approximative solution the matrix defined in (129) need not 
be negative definite, and so it is necessary to introduce into (128) a negative definite matrix 
9°r. as follows 



e^- = e /I --A<o 



(142) 



where A > 0. 

If (124), (125), (126) is infeasible the principle can be modified based on inequalities regrouping 
e.g. in such way that solving (124), (125), and (124), (126) separatively and obtaining two 
virtual sensor structures (one for E = and other for E = I m ). It is evident that virtual sensor 
switching be more sophisticated in this case. 

5.3 Virtual sensor design 

Theorem 5 Virtual sensor (107) associated with the system (103), (104) is stable if there exist symmetric 
positive definite matrix R <G R nxn , and a matrix Z e R nxm / such that 



R = R J > 



RA + A T R - ZCfi + C^-Z T < 



The virtual sensor matrix parameter is then given as 

J = R J Z 



i = 0,1,2, ...,m 



(143) 
(144) 

(145) 



Proof. Supposing that q(t) = and D w = then (108), (110) is reduced as follows 



4>2 

* -7 2 I r 



<0 



R(A-JC / ) + (A-JC / ) T R<0 

respectively. Thus, with the notation 

Z = RJ 

(147) implies (144). This concludes the proof. 



(146) 

(147) 
(148) 



Design Principles of Active Robust Fault Tolerant Control Systems 



329 



Illustrative example 

Using for E = the same system parameters as that given in the example in Subsection 4.1.2 
then the next design parameters were computed 



B 



[-0.8581 0.1907 0.4767] , C T± = [0.5774 -0.5774 0.5774] 



~f0 



1 2 1 
1 1 



, C 



'/l 



000 
1 1 



, c 



/2 



1 2 1 
000 



Solving (124) and the set of polytopic inequalities (126) with respect to P, II using the SeDuMi 
package the problem was feasible and the matrices 

0.6836 0.0569 -0.0569 

0.0569 0.6836 0.0569 

-0.0569 0.0569 0.6836 

as well as H = O.II2 was used to construct the next ones 



On 



0.5688 


0.9111 


-3.0769 


1 


1 


0.9111 


-0.9100 


-5.8103 


2 


1 


3.0769 


-5.8103 


-6.7225 


1 






-0.1 

1.0000 2.0000 1.0000 
1.0000 1.0000 

"0.7405 1.4810 0.7405 00 00" 
1.8234 1.1386 3.3044 



-0.1 



0i, ©2 



12 10000 
1100000 



To obtain negativity of 6°. the matrix A = 4.9417 was introduced. Solving the set of polytopic 
inequalities (128) with respect to K the problem was also feasible and it gave the result 



K 



-0.0734 -0.0008 
-0.1292 0.1307 



which secure robustness of control stability with respect to all structures of output matrices 
Qfi, i = 0, 1, 2. In this sense 

p(A-B u K C) = [-1.0000 -1.3941 ± 2.3919 i] 
p(A-B u K C fl ) = [-1.0000 -2.2603 ± 1.6601 i] 
p(A-B u K C f2 ) = [-1.0000 -1.1337 ± 1.8591 i] 
Solving the set of polytopic inequalities (144) with respect to R, Z the feasible solution was 



R 



0.7188 0.0010 0.0016 
0.0010 0.7212 0.0448 
0.0016 0.0448 0.1299 



-0.0006 0.4457 

0.0117 0.0701 

-0.0629 -0.5894 



Thus, the virtual sensor gain matrix J was computed as 

0.0002 0.6296 

0.0473 0.3868 

-0.5003 -4.6799 



330 



Robust Control, Theory and Applications 













y,(t) 

y 2 (t)) 


/\ ....: 




\T 




V T 




/ ^ 















y e1 (t) 

y e2 w) 


f 


I 










V 


/ 






t 


1 : : 


- 



10 15 20 25 30 



Fig. 7. System output and its estimation 

which secure robustness of virtual sensor stability with respect to all structures of output 
matrices C a-, i = 0, 1, 2. In this sense 



p(A-JC) 



-1.0000 
-1.1656 
-3.4455 



KA-JC/i) 



-1.0000 
-1.2760 
-3.7405 



p(A-B u K C f2 ) 



-1.0000 
-1.1337+ 1.8591 i 
-1.1337- 1.8591 i 



As was mentioned above the simulation results were obtained by solving the semi-definite 
programming problem under Matlab with SeDuMi package 1.2, where the initial conditions 
were set to 

q(0) = [0.2 0.2 0.2] T , q,(0) = [0 0] T 

respectively, and the control law in forced mode was 

u f (t) = -K y e (t)+w(t), w(*)= [-0.2-0.2] 7 

Fig. 7 shows the trajectory of the system outputs and the trajectory of the estimate system 
outputs using virtual sensor structure. It can be seen there a reaction time available to perform 
fault detection and isolation in the trajectory of the estimate system outputs, as well as a 
reaction time of control system reconfiguration in the system output trajectory. The results 
confirm that the true signals and their estimation always reside between limits given by static 
system error of the closed-loop structure. 

6. Active control structures with a single actuator fault 

6.1 Stability of the system 

Theorem 6 Fault tolerant control system defined by (1) - (9) is stable in the sense of bounded real 
lemma if there exist positive definite symmetric matrices Q, R <G R nxn , S <G R lxl , and matrices 



Design Principles of Active Robust Fault Tolerant Control Systems 



331 



k e R rxn , l e R rx/ , J e R nxm , M e R lxL , n e R 

0> n QB W K QB W L 

* a> 22 R(B / - JD / ) - (SNC) T 

* * 3>33 



?!xifi 



m such that 







(C-D„K) T 





(D H K) r 


SM S 


(D H L) r 


7 2 Il 





* — 7 2 I/ 





* * 


— Im 



<0 



(149) 



where 

0> n = Q(A - B M K) + (A - B W K) T Q, <D 22 = R(A - JC) + (A - JC) T R (150) 

a> 33 = S(M - ND / ) + (M - ND / ) T S (151) 

Proof. Considering equality i(t) = i(t) and assembling this equality with (1) - (4), and with 
(7) - (9) gives the result 



rq(01 




*(t) 




f(0 




[te(t)\ 





B M K 



B 



7 



B„L 



JC A-B U K-JC JD 7 B f -JV f -B u L 





NC 





-NC 



ND f 




M-ND 



y= [C-D H KD / -D„L] 



q e (0 

ie{t) 



Tq(01 




" o " 


q*(0 

f( f ) 


+ 




f(0 


Lf,wJ 








(152) 



which can be written in a compact form as 

q a (0 = A a q a (f) + f a (0 

y = C«q a (f) 



where 



qj(0 = [q T (0 q, T (0 f T (0 #(')] , «I(0 = [o T o r F(t) o r ] 



(153) 

(154) 
(155) 

(156) 
(157) 

(158) 

(159) 

where e<j(f) is the error between the actual state and the estimated state, and eAt) is the error 
between the actual fault and the estimated fault, respectively then it is possible to define the 
state transformation 



An 



B W K 



B 



/ 



B W L 



JC A-B M K-JC JD f B / -JD / -B W L 





NC 



-NC 



ND 



'/ 



M-ND 



/ 



C« = [C -D w KD r D M L] 



Using notations 



e,(0 = q(0-q*(0/ e/Ct) = f(t) - f e (f) 



q/»(0 = Tq»(t) 



"q(0' 

f(f) 
e/(0 



,f^(f)=Tf„(f) 






i(t) 

m 



, T 



I 
I -I 
10 
I -I 



(160) 



332 



Robust Control, Theory and Applications 



and to rewrite (154), (155) as follows 

qp(0=A /J q /J (t)+f/j(0 
y = Cpqp(0 



where 



TA a T 



A B M K B f -B U L B U L 
A - JC B f - JD f 



-NC -M M-ND 



/J 



C^ = CJ" 1 = [ C - D W K D W K D f - D W L D W L ] 



Since (5) implies 



B 7 - B W L = 0, 



V f - D M L = 



it obvious that (163), (165) can be simplified as 

Aa = TA^T" 1 



(161) 
(162) 

(163) 

(164) 
(165) 

(166) 
(167) 

(168) 
(169) 

(170) 
(171) 

(172) 
To apply the separation principle a block diagonal symmetric matrix P^ > has to be chosen, 



A B W K B M L 

A - JC Bf-JDf 



-NC -MM-NDf 



C^ = CJ" 1 = [C - D M K D W K D M L] 
Eliminating out equality i(t) = i(t) it can be written 

q*(0 = A,q*(0+B*w*(0 

y = C s qs(t) + D 3 w 3 (t) 



where 



qj(0 = [q T (0 <W ej(t)] , wj(t) = [f{t) '?{t)\ 



A B M K B W L 

A - JC B / - JD / 



-NC M-ND 



/J 






0" 








-M I 



C s = [C-D M KD„KD H L], D,= [0 0] 



P^ = diag[QRS] 
where Q, R 6 R nxn , S 6 R lx} . Thus, with (150), (151) it yields 



FgAs + AiVs 



On QB U K QB„L 

* *22 R(B / -JD / )- (SNC) T 

* * *33 



,V S B S 






0" 








-SM S 



(173) 



(174) 



and inserting (171), (172), and (174) into (24) gives (149). This concludes the proof. 



Design Principles of Active Robust Fault Tolerant Control Systems 



333 



6.2 Feedback controller gain matrix design 

It is evident that there are the cross parameter interactions in the structure of (149). Since the 
separation principle pre-determines the estimator structure (error vectors are independent on 
the state as well as on the input variables), at the first design step can be computed a feedback 
controller gain matrix K, and at the next step be designed the estimators gain matrices J G 
R nxm , M e R /x/ , N e R lxm , including obtained K. 

Theorem 7. For a fault-free system (1), (2) exists a stable nominal control (4) if there exist a positive 
definite symmetric matrix X > 0, X <G R nxn , a matrix Y £ R rxn , and a positive scalar 7 > 0, 7 <G R 
such that 

X 



AX + XA T - Y T B 7 T 



X T >0 
B M Y B W L XC T -Y T D£ 

-7 2 Il L T D£ 

* — i-m 



<0 



The control law gain matrix is then given as 

K = YX _1 
Proof. Considering e^(f) = 0, then separating q(f) from (168)-(169) gives 

q(f)=A°q(t) + B°w°(f) 

y(0 = C°q(f) + D°w°(f) 

where 

w°(t) = e / (0 

A° = A - B„K, B° = B U L, C° = C - D U K, 

and with (181), and P = Q inequality (24) can be written as 



D° 



D U L 



TnTl 



QA + A T Q - QB U K - K t b£q QB u L C t - K j D 

- 7 2 i ; L T r>l 



<0 



Introducing the congruence transform matrix 

H^diagfQ" 1 Ijl m ] 
then multiplying left-hand side, as well right-hand side of (182) by (183) gives 



AQ 1 + Q 1 A i -bjkq^-qicb; b m l qh^-k^d 



:Tj^T\ 






-Im 



<0 



With notation 



X > 0, KQ _1 



(175) 
(176) 

(177) 

(178) 
(179) 

(180) 
(181) 

(182) 

(183) 

(184) 
(185) 



(184) implies (176). This concludes the proof. 



334 



Robust Control, Theory and Applications 



6.3 Estimator system matrix design 

Theorem 8 For given scalar j > 0, j e R, and matrices Q = Q T > 0, Q e R nxn , K e R rxn , 
L G R rxl estimators (7) - (9) associated with the system (1), (2) are stable if there exist symmetric 



positive definite matrices R e R nxn , S e K /x/ , and matrices Z e R nxm , V e R lxl , W e R 



lxl 



?lxm 



such that 



R 

S 



O22 RB 



'/■ 



ZDy 



R T > 
S T >0 
(WC) T 



33 



-V 

- 7 2 I/ 





s 



-7 2 Il 



(D„K) r 
(D H L) r 





Am 



where 



■ C T Z T , 



o 



33 



O22 = RA - ZC + A J R 

The estimators matrix parameters are then given as 

M = S _1 V, N = S _1 W, J = R" ] 
Proof. Supposing that q(f) = then (149) is reduced as follows 



V- WD f + V T -DTW T 



r 



(186) 
(187) 

(188) 

(189) 
(190) 



<D 22 R(B / - JD / ) - (SNC) T 

-SM 



3>33 



-7 2 I/ 





s 



-7 2 Il 



(D W K) T 

(D W L) T 





— Am 



<o 



(191) 



where 

«D 22 = R(A-JC) + (A-JC) T R / 

Thus, with notation 



<D 



33 



S(M - ND / ) + (M - ND / ) T S (192) 

SM = V, SN = W, RJ = Z 



(193) 



(191), (192) implies (188), (189). This concludes the proof. 

It is obvious that V e = A — JC, as well as M have to be stable matrices. 



6.4 Illustrative example 

To demonstrate algorithm properties it was assumed that the system is given by (1), (2) where 



1 



0" 

1 




Vu = 


"1 3" 
2 1 




Bf = 


"1" 

2 




L = 


"-1" 



5 -9 -5_ 




15 




1 




C = 


"1 
1 


2 1" 
1 


, D, 


1 — 


00" 
00 


> »J 




"0" 








Design Principles of Active Robust Fault Tolerant Control Systems 



335 













f(t) 


1 




ft : : 




V 


I 


f 



10 15 20 25 30 




15 20 25 30 35 40 

t[s] 



Fig. 8. The first actuator fault as well as its estimation and system input variables 










f 




y e1 w 

y e2 W) - 


Hi-- : 







If ; 



20 25 30 35 



Fig. 9. System output and its estimation 

Solving (175), (176) with respect to the LMI matrix variables 7, X, and Y using 
Self-Dual-Minimization (SeDuMi) package for Matlab, the feedback gain matrix design 
problem was feasible with the result 



I.7454 -0.8739 0.0393 

-0.8739 1.3075 -0.5109 

0.0393 -0.5109 2.0436 



0.9591 1.2907-0.1049 
-0.1950 -0.5166 -0.4480 



7 = 1.8509 



K 



1.2524 1.7652 0.0436 
-0.0488 -0.2624 -0.3428 



In the next step the solution to (186) - (188) using design parameters 7 = 1.8509 was also 
feasible giving the LMI variables 

V = -1.3690, S = 1.1307, W = [0.9831 0.7989] 



R 



1.7475 0.0013 0.0128 
0.0013 1.4330 0.0709 
0.0128 0.0709 0.6918 



-0.0320 1.0384 

0.1972 0.1420 

-2.0509 -1.1577 



336 



Robust Control, Theory and Applications 



which gives 



0.0035 0.6066 

0.2857 0.1828 

-2.9938 -1.7033 



N 



0.8694 0.7066 



M: 



-1.2108 



Since M < it is evident that the fault estimator is stable and verifying the rest subsystem 
stability it can see that 



^qe 



A-B M K 



A-JC 



-1.1062 0.0282 0.9847 
-2.4561 -3.2659 1.2555 
-6.0087 -9.4430 -3.3297_ 

-0.6101 0.3864 -0.0035" 
-0.4684 -0.7541 0.7143 
-0.3029-1.3092-2.0062 



{-0.7110 - 3.4954 ±i 4.3387} 



Q(A qe ) = {-1.0000 - 1.1852 ±i 0.7328} 



where q(-) is eigenvalue spectrum of a real square matrix. It is evident that the designed 
observer-based control structure results the stable system. 

The example is shown of the closed-loop system response in the autonomous mode where Fig. 
8 represents the first actuator fault as well as its estimation, and the system input variables, 
respectively, and Fig. 9 is concerned with the system outputs and its estimation, respectively. 

7. Concluding remarks 

This chapter provides an introduction to the aspects of reconfigurable control design method 
with emphasis on the stability conditions and related system properties. Presented viewpoint 
has been that non-expansive system properties formulated in the Hoo design conditions 
underpins the nature of dynamic and feedback properties. Sufficient conditions of asymptotic 
stability of systems have thus been central to this approach. Obtained closed-loop eigenvalues 
express the internal dynamics of the system and they are directly related to aspects of system 
performance as well as affected by the different types of faults. On the other hand, control 
structures alternation achieved under virtual sensors, or by design or re-design of an actuator 
fault estimation can be done robust with respect of unaccepted faults. The role and significance 
of another reconfiguration principles may be found e.g. in the literature (Blanke et al.,2003), 
(Krokavec and Filasova,2007), (Noura et al.,2009), and references therein. 

8. Acknowledgment 

The main results presented in this chapter was derived from works supported by VEGA, 
Grant Agency of Ministry of Education and Academy of Science of Slovak Republic, under 
Grant No. 1/0256/11. This support is very gratefully acknowledged. 

9. References 

[Blanke et al.,2003] Blanke, M.; Kinnaert, M; Lunze, J. & Staroswiecki, M. (2003). Diagnosis 

and Fault-tolerant Control Springer, ISBN 3-540-01056-4, Berlin. 
[Boyd et al.,1994] Boyd, D.; El Ghaoui, L.; Peron, E. & and Balakrishnan, V. (1994). Linear 

Matrix Inequalities in System and Control Theory. SIAM Society for Industrial and Applied 

Mathematics, ISBN, 0-89871-334-X, Philadelphia 



Design Principles of Active Robust Fault Tolerant Control Systems 337 

[Chen and Patton,1999] Chen, J. & Patton, R.J. (1999). Robust Model-Based Fault Diagnosis for 

Dynamic Systems. Kluwer Academic Publishers, ISBN 0-7923-8411-3, Norwell. 
[Chen et al.,1999] Chen, J.; Patton, R.J. & Chen, Z. (1999). Active fault-tolerant flight control 

systems design using the linear matrix inequality method. Transactions of the Institute of 

Measurement and Control, ol. 21, No. 2, (1999), pp. 77-84, ISSN 0142-3312. 
[Chiang et al.,2001] Chiang, L.H.; Russell, E.L. & Braatz, R.D. (2001). Fault Detection and 

Diagnosis in Industrial Systems. Springer, ISBN 1-85233-327-8, London. 
[Ding,2008] Ding, S.X. (2008). Model-based Fault Diagnosis Techniques: Design Schemes, 

Alghorithms, and Tools. Springer, ISBN 978-3-540-76304-8, Berlin. 
[Dong et al.,2009] Dong, Q.; Zhong, M. & Ding, S.X. (2009). On active fault tolerant control 

for a class of time-delay systems. Preprints of 7th IFAC Symposium on Fault Detection, 

Supervision and Safety of Technical Processes, SAFEPROCESS 2009, pp. 882-886, Barcelona, 

Spain, June 30, - July 3, 2009. 
[Ducard,2009] Ducard, G.J.J. (2009). Fault-tolerant Flight Control and Guidance Systems. Practical 

Methods for Small Unmanned Aerial Vehicles. Springer, ISBN 978-1-84882-560-4, London. 
[Filasova and Krokavec,2009] Filasova, A. & Krokavec, D. (2009). LMI-supported design of 

residual generators based on unknown-input estimator scheme. Preprints ot the 6 th IFAC 

Symposium on Robust Control Design ROCOND '09, pp. 313-319, Haifa, Israel, June 16-18, 

2009, 
[Gahinet et al.,1995] P. Gahinet, P.; Nemirovski, A.; Laub, A.J. & Chilali, M. (1995). EMI Control 

Toolbox User's Guide, The Math Works, Natick. 
[Herrmann et al.,2007] Herrmann, G.; Turner, M.C. & Postlethwaite, I. (2007). Linear matrix 

inequalities in control. Mathematical Methods for Robust and Nonlinear Control, Turner, 

M.C. and Bates, D.G. (Eds.), pp. 123-142, Springer, ISBN 978-1-84800-024-7, Berlin. 
[Jiang,2005] Jiang, J. (2005). Fault-tolerant Control Systems. An Introductory Overview. Acta 

Automatica Sinica, Vol. 31, No. 1, (2005), pp. 161-174, ISSN 0254-4156. 
[Krokavec and Filasova,2007] Krokavec, D. & Filasova, A. (2007). Dynamic Systems Diagnosis. 

Elfa, ISBN 978-80-8086-060-8, Kosice (in Slovak). 
[Krokavec and Filasova,2008] Krokavec, D. & Filasova, A. (2008) Diagnostics and 

reconfiguration of control systems. Advances in Electrical and Electronic Engineering, Vol. 

7, No. 1-2, (2008), pp. 15-20, ISSN 1336-1376. 
[Krokavec and Filasova,2008] Krokavec D. & A. Filasova, A. (2008). Performance of 

reconfiguration structures based on the constrained control. Proceedigs of the 17 th IFAC 

World Congress 2008, pp. 1243-1248, Seoul, Korea, July 06-11, 2008. 
[Krokavec and Filasova,2009] Krokavec, D. & Filasova, A. (2009). Control reconfiguration 

based on the constrained LQ control algorithms. Preprints of 7 th IFAC Symposium on 

Fault Detection, Supervision and Safety of Technical Processes SAFEPROCESS 2009, pp. 

686-691, Barcelona, Spain, June 30, - July 3, 2009. 
[Liao et al.,2002] Liao, E; Wang, J.L. & Yang, G.H. (2002). Reliable robust flight tracking 

control. An LMI approach. IEEE Transactions on Control Systems Technology, Vol. 10, No. 

1, (2002), pp. 76-89, ISSN 1063-6536. 
[Nesterov and Nemirovsky,1994] Nesterov, Y; & Nemirovsky, A. (1994). Interior Point 

Polynomial Methods in Convex Programming. Theory and Applications, SIAM, ISBN 

0-89871-319-6, Philadelphia. 
[Nobrega et al.,2000] Nobrega, E.G.; Abdalla, M.O. & Grigoriadis, K.M. (2000). LMI-based 

filter design for fault detection and isolation. Proceedings of the 39 th IEEE Conference 

Decision and Control 2000, Vol. 5, pp. 4329-4334, Sydney, Australia, December 12-15, 2000. 



338 Robust Control, Theory and Applications 

[Noura et al.,2009] Noura, H.; Theilliol, D.; Ponsart, J.C. & Chamseddine, A. (2009). 

Fault-tolerant Control Systems. Design and Practical Applications. Springer, ISBN 

978-1-84882-652-6, Berlin. 
[Patton,1997] Patton. R.J. (1997). Fault-tolerant control. The 1997 situation. Proceedings of 

the 3 rd IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes 

SAFEPROCESSS97, Vol. 2, pp. 1033-1054, Hull, England, August 26-28, 1997. 
[Peaucelle et al.,2002] Peaucelle, D.; Henrion, D.; Labit, Y. & Taitz, K. (2002). User's Guide for 

Sedumi Interface 1.04. LAAS-CNRS, Toulouse. 
[Simani et al.,2003] Simani, S.; Fantuzzi, C. & Patton, R.J. (2003). Model-based Fault Diagnosis in 

Dynamic Systems Using Identification Techniques. Springer, ISBN 1-85233-685-4, London. 
[Skelton et al.,1998] Skelton, R.E., Iwasaki, T. & Grigoriadis, K. (1998) A Unified Algebraic 

Approach to Linear Control Design. Taylor & Francis, ISBN 0-7484-0592-5, London. 
[Staroswiecki,2005] Staroswiecki, M. (2005). Fault tolerant control. The pseudo-inverse 

method revisited. Proceedings of the 16 th IFAC World Congress 2005, Prag, Czech Repulic, 

July 4-8, 2005. 
[Theilliol et al.2008] Theilliol, D.; Join, C. & Zhang, Y.M. (2008). Actuator fault tolerant 

control design based on a reconfigurable reference input. International Journal of Applied 

Mathematics and Computer Science, Vol.18, No.4, (2008), pp. 553-560, ISSN 1641-876X. 
[Zhang and Jiang,2008] Zhang, Y.M. & Jiang, J. (2008). Bibliographical review on 

reconfigurable fault-tolerant control systems. Annual Reviews in Control, Vol. 32, (2008), 

pp. 229U-252, ISSN 1367-5788. 
[Zhou and Ren,2001] Zhou, K.M. & Ren, Z. (2001). A new controller architecture for high 

performance, robust and fault tolerant control. IEEE Transactions on Automatic Control, 

Vol. 40, No. 10, (2001), pp. 1613-1618, ISSN 0018-9286. 



15 



Robust Model Predictive Control for 

Time Delayed Systems with 

Optimizing Targets and Zone Control 

Alejandro H. Gonzalez 1 and Darci Odloak 2 

institute of Technological Development for the Chemical Industry (INTEC), 

CONICET - Universidad Nacional del Litoral (U.N.L.). 

Giiemes 3450, (3000) Santa Fe, 

department of Chemical Engineering, University of Sao Paulo, 

Av. Prof. Luciano Gualberto, trv 3 380, 61548 Sao Paulo, 

Argentina 
2 Brazil 



1. Introduction 

Model Predictive Control (MPC) is frequently implemented as one of the layers of a control 
structure where a Real Time Optimization (RTO) algorithm - laying in an upper layer of this 
structure - defines optimal targets for some of the inputs and/ or outputs (Kassmann et al., 
2000). The main scope is to reach the most profitable operation of the process system while 
preserving safety and product specification constraints. The model predictive controller is 
expected to drive the plant to the optimal operating point, while minimizing the dynamic 
error along the input and output paths. Since in the control structure considered here the 
model predictive controller is designed to track the optimal targets, it is expected that for 
nonlinear process systems, the linear model included in the controller will become uncertain 
as we move from the design condition to the optimal condition. The robust MPC presented 
in this chapter explicitly accounts for model uncertainty of open loop stable systems, where 
a different model corresponds to each operating point of the process system. In this way, 
even in the presence of model uncertainty, the controller is capable of maintaining all 
outputs within feasible zones, while reaching the desired optimal targets. In several other 
process systems, the aim of the MPC layer is not to guide all the controlled variables to 
optimal targets, but only to maintain them inside appropriate ranges or zones. This strategy 
is designated as zone control (Maciejowski, 2002). The zone control may be adopted in some 
systems, where there are highly correlated outputs to be controlled, and there are not 
enough inputs to control all the outputs. Another class of zone control problems relates to 
using the surge capacity of tanks to smooth out the operation of a process unit. In this case, 
it is desired to let the level of the tank to float between limits, as necessary, to buffer 
disturbances between sections of a plant. The paper by Qin and Badgwell (2003), which 
surveys the existing industrial MPC technology, describes a variety of industrial controllers 
and mention that they always provide a zone control option. Other example of zone control 
can be found in Zanin et al, (2002), where the authors exemplify the application of this 



340 Robust Control, Theory and Applications 

strategy in the real time optimization of a FCC system. Although this strategy shows to have 
an acceptable performance, stability is not usually proved, even when an infinite horizon is 
used, since the control system keeps switching from one controller to another throughout 
the continuous operation of the process. 

There are several research works that treat the problem of how to obtain a stable MPC with 
fixed output set points. Although stability of the closed loop is commonly achieved by 
means of an infinite prediction horizon, the problem of how to eliminate output steady state 
offset when a supervisory layer produces optimal economic set points, and how to explicitly 
incorporate the model uncertainty into the control problem formulation for this case, remain 
an open issue. For the nominal model case, Rawlings (2000), Pannochia and Rawlings (2003), 
Muske and Badgwell (2002), show how to include disturbance models in order to assure 
that the inputs and states are led to the desired values without offset. Muske and Badgwell 
(2002) and Pannochia and Rawlings (2003) develop rank conditions to assure the 
detectability of the augmented model. 

For the uncertain system, Odloak (2004) develops a robust MPC for the multi-plant 
uncertainty (that is, for a finite set of possible models) that uses a non-increasing cost 
constraint (Badgwell, 1997). In this strategy, the MPC cost function to be minimized is 
computed using a nominal model, but the non-increasing cost constraint is settled for each 
of the models belonging to the set. The stability is then achieved by means of the recursive 
feasibility of the optimization problem, instead of the optimality. On the other hand, there 
exist some recent MPC formulations that are based on the existence of a control Lyapunov 
function (CLF), which is independent of the control cost function. Although the construction 
of the CFL may not be a trivial task, these formulations also allow the explicit 
characterization of the stability region subject to constraints and they do not need an infinite 
output horizon. Mashkar et al. (2006) explore this approach for the control of nominal 
nonlinear systems, and Mashkar (2006) extends the approach for the case of model 
uncertainty and control actuator fault. More recently, Gonzalez et al. (2009) extended the 
infinite horizon approach to stabilize the closed loop with the MPC controller for the case of 
multi-model uncertainty and optimizing targets. They developed a robust MPC by adapting 
the non-increasing cost constraint strategy to the case of zone control of the outputs and it is 
desirable to guide some of the manipulated inputs to the targets given by a supervisory 
stationary optimization stage, while maintaining the controlled output in their 
corresponding zones, taking into account a finite set of possible models. This problem, that 
seems to interchange an output tracking by an input-tracking formulation, is not trivial, 
since once the output lies outside the corresponding zone (because of a disturbance, or a 
change in the output zones), the priority of the controller is again to control the outputs, 
even if this implies that the input must be settled apart from its targets. 

Since in many process systems, mainly from the chemical and petrochemical industries, the 
process model shows significant time delays, the main contribution of this chapter is to 
extend the approach of Gonzalez et al. (2009) to the case of input delayed multi-model 
systems by introducing minor modifications in the state space model, in such a way that the 
structure of the control algorithm is preserved. Simulation of a process system of the oil 
refining industry illustrates the performance of the proposed strategy. 

2. System representation 

Consider a system with nu inputs and ny outputs, and assume for simplicity that the poles 
relating any input Uj to any output y; are non-repeated. To account for the implementation of 



Robust Model Predictive Control for Time Delayed Systems 
with Optimizing Targets and Zone Control 



341 



an intuitive MPC formulation, an output prediction oriented model (OPOM) originally 
presented in Odloak (2004) is adapted here to the case of time delayed systems. Let us 
designate # f the time delay between input Uj and output y\, and define p > max# • Then, 
the state space model considered here is defined as follows: l, i 



x(k + l) = Ax(k) + BAu(k) 
y(k) = Cx(k) 



where x(k) = [y(k) T y(k + l) T ••• y(k + p) T x s (k) T x d (k) T J 



(1) 



U 


• 


• 










- S, ~ 





ny 


• 










S 2 





• 


1-ny 








, B = 


S v 





• 


• 


ny 


5F((? + i)r) 




Sp+i 





• 


• 


m J 







B s 





• 


• 





F 




B d 



C = V«y 



X 



s g K ny , x d g C nd , Fe C n 







•] 



(2) 



W g K nymd , I ny = diag([l • • • 1]) g K nyxn v . 

The advantage of using the structure of the transition matrix A is that the state vector is 
divided into components that are associated to the system modes. In the state equation (1), 
the state components x s correspond to the (predicted) output steady state, which are in 
addition the integrating modes of the system (the integrating modes are induced by the 
incremental form of the inputs), and the components x d correspond to the stable modes of 
the system. Naturally, when the system approaches steady state these last components tend 
to zero. For the case of non-repeated pole, F is a diagonal matrix with components of the 
form e nT where r\ is a pole of the system and T is the sampling period. It is assumed that the 
system has nd stable poles and B s is the gain matrix of the system. The upper left block of 
matrix A is included to account for the time delay of the system. Si, ... , S v +\ are the step 
response coefficients of the system. Matrix *F , which appears in the extended state matrix, 
is defined as follows 



nt)- 



Mt) o ... o 
o Mt) - o 



o 



o 



where 



m = [e r >^'-^ - eWM-4) 



M) 



r inul (t-O inu ) 

a i, mi A V i, nn I 



r inuna (t-a 



«], 



342 Robust Control, Theory and Applications 

r if : k , with k=l,...,na, are the poles of the transfer function that relates input Uj and output 
\ji and na is the order of this transfer function. It is assumed that na is the same for any 
pair (uj, \)i). The time delay affects the dimension of the state matrix A through parameter p 
and the components of matrix W . Input matrix B is also affected by the value of the time 
delay as the step response coefficients S n will be equal to zero for any n smaller than the 
time delay. 

2.1 Model uncertainty 

With the model structure presented in (1), model uncertainty is related to uncertainty in 
matrices F, B s , B d and the matrix of time delays 6 . The uncertainty in these parameters also 
reflects in the uncertainty of the step response coefficients, which appear in (2). There are 
several practical ways to represent model uncertainty in model predictive control. One 
simple way to represent model uncertainty is to consider the multi-plant system (Badgwell, 
1997), where we have a discrete set Q of plants, and the real plant is unknown, but it is 
assumed to be one of the components of this set. With this representation of model 
uncertainty, we can define the set of possible plants as Q = {& 1 ,...,& L } where each © n 
corresponds to a particular plant: © n = (f, B s , B d ,6\ , n = 1, ..., L . 

Also, let us assume that the true plant, which lies within the set Q is designated as <9r and 
there is a most likely plant that also lies in Q and is designated as <9 N . In addition, it is 
assumed that the current estimated state corresponds to the true plant. 

Badgwell (1997) developed a robust linear quadratic regulator for stable systems with the 
multi-plant uncertainty. Later, Odloak (2004) extended the method of Badgwell to the 
output tracking of stable systems considering the same kind of model uncertainty. These 
strategies include a new constraint corresponding to each of the models lying in Q, that 
prevents an increase in the true plant cost function at successive time steps. More recently, 
Gonzalez and Odloak (2009) presented an extension of the method by combining the 
approach presented in Odloak (2004) with the idea of including the output set point as a 
new restricted optimization variable to develop a robust MPC for systems where the control 
objective is to maintain the outputs into their corresponding feasible zone, while reaching 
the desired optimal input target given by the supervisory stationary optimization. In this 
work the controller proposed by Gonzalez et al. (2009) is extended to the case of uncertain 
systems with time delays. 

2.2. System steady state 

As was already said, one of the advantages of the model defined in (1) and (2) is that the 
state component x s (k) represents the predicted output at steady state, and furthermore this 
component concentrates the integrating modes of the system. Observe that for the model 
defined in (1) and (2), if Au(k + ;) = for ; > , then the future states can be computed as 
follows 

x(k + j) = A j x(k) 

Assuming that F has all the eigenvalues inside the unit circle (i.e. the system is open loop 
stable), it is easy to show that 



Robust Model Predictive Control for Time Delayed Systems 
with Optimizing Targets and Zone Control 



343 



limA ; : 



Then, it becomes clear that Km x(k + j) = \. 



"0 


• • l ny °1 





- l «y 





- l ny ° 





••• 


^=r 


X s (kf ... 



: s (k) T o] and consequently 



limy (A: + /') = Climx(A: + ;') = x s (k) • Therefore, x s (k) can be interpreted as the prediction of the 

;->oo ;->oo 

output at steady state. The state component x s (k) is assumed to be known or estimated 
through a stable state observer. A stable observer for the model defined in (1) and (2) is 
given by 

x(k + l\k + l) = x(k + l\k) + K(y(k + l)-Cx(k + l\k)) 
where K = I • • • I is the observer gain, and 

x(k + l\k + l) = x(k + l\k) + K(y(k + l)-Cx(k + l\k)) 

x(k + l\k + l) = (I-KC)Ax(k\k) + (I-KC)BAu(k) 

For open loop stable systems this is a stable observer as matrix (I-KC)A has the 
eigenvalues of F and the remaining eigenvalues are equal to zero. 

3. Control structure 

In this work, we consider the control structure shown in Figure 1. In this structure, the 
economic optimization stage is dedicated to the calculation of the (stationary) desired target, 
u desk , for the input manipulated variables. This stage may be based on a rigorous stationary 
model and takes into account the process measurements and some economic parameters. In 
addition, this stage works with a smaller frequency than the low-level control stage, which 
allows a separation between the two stages. In the zone control framework the low-level 
control stage, given by the MPC controller, is devoted to guide the manipulated input from 
the current stationary value u ss to the desired value given by the supervisory economic 
stage, u desk , while keeping the outputs within specified zones. In general, the target UdesM 
will vary whenever the plant operation or the economic parameters change. If it is assumed 
that the system is currently at a stationary value given by (u ss ,y ss ), the desired target Ud eS/ k 
should satisfy not only the input constraints 

W min - U des,k - W max 



but also the output zone condition 



(3) 



344 



Robust Control, Theory and Applications 



where $u_{\min}$ and $u_{\max}$ represent the lower and upper bounds of the input, $y_{\min}$ and $y_{\max}$
represent the lower and upper limits of the output, $B^s(\Theta_n)$ is the steady-state gain corresponding to a
given model $\Theta_n$, and $x^s_n(k)$ is the estimated steady-state value of the output
corresponding to model $\Theta_n$. Note that, in the control structure depicted in Figure 1, as the
model structure adopted here has integral action, the estimate of the component $x^s_n(k)$ tends
to the measured output at steady state for all the models lying in $\Omega$, which means that
$x^s_n(k) = y_{ss}$ if the system is at steady state (see Gonzalez and Odloak, 2009, for details).
Taking this fact into account, equation (3) can be rewritten as



$$y_{\min} \le B^s(\Theta_n)\,u_{des,k} + d_{n,ss} \le y_{\max},\qquad n = 1,\dots,L, \qquad (4)$$

where $d_{n,ss} = x^s_n(k) - B^s(\Theta_n)u_{ss} = y_{ss} - B^s(\Theta_n)u_{ss}$ is the output bias based on the comparison
between the current actual output at steady state and the current predicted output at steady
state for each model. In other words, $B^s(\Theta_n)u_{des,k} + d_{n,ss}$ can be interpreted as the corrected
output steady state. Note that, since $u_{ss} = \sum_{j=0}^{k-1}\Delta u(j)$ for a large $k$, the term $B^s(\Theta_n)u_{ss}$
represents the output prediction based only on the past inputs.











[Figure 1 shows the control structure: an Economic Optimization layer receives the economic parameters and the measured inputs and outputs, and sends the input target and the input and output ranges to the Robust MPC; the Robust MPC computes $\Delta u(k)$ for the System, which is subject to measured and unmeasured disturbances; an Observer estimates the state from $u(k)$ and $y(k)$.]

Fig. 1. Control structure.

Based on these concepts, it is possible to define two input feasible sets for the stationary
desired target $u_{des,k}$. The first one is the global input feasible set $\mathbb{S} = \{u : u_{\min} \le u \le u_{\max}\}$,
which is a box-type set. In addition, it is possible to define the more restricted input
feasible set $\mathbb{S}_u$, which is computed taking into account both the input constraints and the
output limits:

$$\mathbb{S}_u = \bigl\{u : u_{\min} \le u \le u_{\max}\ \text{and}\ y_{\min} \le B^s(\Theta_n)\,u - B^s(\Theta_n)\,u_{ss} + y_{ss} \le y_{\max},\ n = 1,\dots,L\bigr\} \qquad (5)$$

This set, which depends on the current stationary point $(u_{ss}, y_{ss})$, is the intersection
of several sets, each one corresponding to a model lying in the set $\Omega$. When the output zones are
narrow, the restricted input feasible set is smaller than the global feasible set, defined solely
by the input constraints. An intuitive diagram of the input feasible sets is shown in Figure 4,
where three models are used to represent the uncertainty set. In the following sections it will
be shown that the proposed controller remains stable and feasible even when the desired
input target $u_{des,k}$ is outside the set $\mathbb{S}_u$, or the set $\mathbb{S}_u$ itself is empty.
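As an illustration of the two sets, the following sketch (assuming NumPy arrays for the bounds and a list of steady-state gains $B^s(\Theta_n)$, one per model in $\Omega$) tests whether a candidate target belongs to the restricted set defined by (5); the function and variable names are illustrative only.

```python
import numpy as np

def in_restricted_set(u, u_min, u_max, y_min, y_max, Bs_list, u_ss, y_ss):
    """True if u satisfies the input box and, for every model Theta_n,
    the steady-state output prediction B^s(Theta_n)(u - u_ss) + y_ss
    stays inside the output zone (condition (5))."""
    u = np.asarray(u, dtype=float)
    if np.any(u < u_min) or np.any(u > u_max):
        return False
    for Bs in Bs_list:                   # one steady-state gain per model in Omega
        y_pred = Bs @ (u - u_ss) + y_ss  # corrected steady-state output prediction
        if np.any(y_pred < y_min) or np.any(y_pred > y_max):
            return False
    return True
```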

4. Nominal MPC with zone control and input target 

One way to handle the zone control strategy, that is, to maintain the controlled output inside 
its corresponding range, is by means of an appropriate choice of the output error 
penalization in the conventional MPC cost function. In this case the output weight is made 
equal to zero when the system output is inside the range, and the output weight is different 
from zero if the output prediction is violating any of the constraints, so that the output 
variable is strictly controlled only if it is outside the feasible range. In this way, the closed 
loop is guided to a feasible steady state. In Zanin et al. (2002), an algorithm assigns three 
possible values to the output set points used in the MPC controller: the upper bound of the 
output feasible range if the predicted output is larger than the upper bound; the lower 
bound of the output feasible range if the predicted output is smaller than this lower bound; 
and the predicted output itself, if the predicted output is inside the feasible range. However, 
a rigorous analysis of the stability of this strategy is not possible even when using an infinite 
output horizon. Gonzalez et al. (2006) describe a stable MPC based on the incremental 
model defined in (1) and (2), that takes into account a stationary optimization of the plant 
operation. The controller was designed specifically for a heat exchanger network with a 
number of degrees of freedom larger than zero. In that work, the mismatch between the 
stationary and the dynamic model was treated by means of an appropriate choice of the 
weighting matrices in the control cost. However, stability and offset elimination were assured
only when the model was perfect.
Based on the work of Gonzalez et al (2006), we consider the following nominal cost function: 

$$V_k = \sum_{j=0}^{\infty}\bigl(y(k+j\mid k)-y_{sp,k}\bigr)^T Q_y\bigl(y(k+j\mid k)-y_{sp,k}\bigr) + \sum_{j=0}^{\infty}\bigl(u(k+j\mid k)-u_{des,k}\bigr)^T Q_u\bigl(u(k+j\mid k)-u_{des,k}\bigr) + \sum_{j=0}^{m-1}\Delta u(k+j\mid k)^T R\,\Delta u(k+j\mid k) \qquad (6)$$

where $\Delta u(k+j\mid k)$ is the control move computed at time $k$ to be applied at time $k+j$, $m$ is the
control or input horizon, $Q_y$, $Q_u$ and $R$ are positive weighting matrices of appropriate dimension,
and $y_{sp,k}$ and $u_{des,k}$ are the output and input targets, respectively. The output target $y_{sp,k}$ becomes a
computed set point when the output has no optimizing target and, consequently, the output is
controlled by zone. This cost explicitly incorporates an input deviation penalty that tries to
accommodate the system at an optimal economic stationary point.






In the case of systems without time delay, the term corresponding to the infinite output error
in the cost $V_k$ is divided into two parts: the first goes from the current time k to the end of the
control horizon, k+m-1, while the second one goes from time k+m to infinity. This is so
because beyond the control horizon no control actions are implemented and so, considering 
only the state at time k+m, the infinite series can be reduced to a single terminal cost. In the 
case of time delayed systems, however, the horizon beyond which the entire output 
evolution can be predicted by a terminal cost is given by k+p. As a result, the cost defined in 
(6) can be developed as follows 



$$\begin{aligned}
V_k &= \sum_{j=0}^{p}\bigl(y(k+j\mid k)-y_{sp,k}\bigr)^T Q_y\bigl(y(k+j\mid k)-y_{sp,k}\bigr)\\
&\quad+\sum_{j=1}^{\infty}\bigl(y(k+p+j\mid k)-y_{sp,k}\bigr)^T Q_y\bigl(y(k+p+j\mid k)-y_{sp,k}\bigr)\\
&\quad+\sum_{j=0}^{\infty}\bigl(u(k+j\mid k)-u_{des,k}\bigr)^T Q_u\bigl(u(k+j\mid k)-u_{des,k}\bigr)\\
&\quad+\sum_{j=0}^{m-1}\Delta u(k+j\mid k)^T R\,\Delta u(k+j\mid k)
\end{aligned}\qquad(7)$$

The first term on the right hand side of (7) can be developed as follows



$$V_{k,1} = \bigl(\bar y_k - \bar I_y\,y_{sp,k}\bigr)^T\bar Q_y\bigl(\bar y_k - \bar I_y\,y_{sp,k}\bigr)$$

where

$$\bar y_k = \begin{bmatrix} y(k\mid k)\\ y(k+1\mid k)\\ \vdots\\ y(k+p\mid k)\end{bmatrix} = N_x\,x(k) + \bar S\,\Delta u_k, \qquad (8)$$

$$N_x = \begin{bmatrix} I_{(p+1)ny} & 0\end{bmatrix}\in\mathbb{R}^{(p+1)ny\times nx},\qquad
\bar S = \begin{bmatrix} 0 & 0 & \cdots & 0\\ S_1 & 0 & \cdots & 0\\ S_2 & S_1 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ S_p & S_{p-1} & \cdots & S_{p-m+1}\end{bmatrix},$$

$$\bar I_y = \begin{bmatrix} I_{ny} & \cdots & I_{ny}\end{bmatrix}^T\in\mathbb{R}^{(p+1)ny\times ny},\qquad
\bar Q_y = \mathrm{diag}\bigl(\underbrace{Q_y,\dots,Q_y}_{p+1}\bigr),\qquad nx = (p+1)ny + ny + nd$$
Consequently, considering (8), the term $V_{k,1}$ can be written as follows






$$V_{k,1} = \bigl(N_x x(k) + \bar S\,\Delta u_k - \bar I_y\,y_{sp,k}\bigr)^T\bar Q_y\bigl(N_x x(k) + \bar S\,\Delta u_k - \bar I_y\,y_{sp,k}\bigr) \qquad (9)$$



The term corresponding to the infinite horizon error on the system output in (7) can be 
written as follows 



$$V_{k,2} = \sum_{j=1}^{\infty}\bigl(y(k+p+j\mid k)-y_{sp,k}\bigr)^T Q_y\bigl(y(k+p+j\mid k)-y_{sp,k}\bigr)$$

$$V_{k,2} = \sum_{j=1}^{\infty}\bigl(x^s(k+m\mid k)+\Psi(p+j-m)\,x^d(k+m\mid k)-y_{sp,k}\bigr)^T Q_y\bigl(x^s(k+m\mid k)+\Psi(p+j-m)\,x^d(k+m\mid k)-y_{sp,k}\bigr) \qquad (10)$$

where $x^s(k+m\mid k) = x^s(k) + \tilde B^s\,\Delta u_k$ and $\tilde B^s = \begin{bmatrix} B^s & \cdots & B^s\end{bmatrix}$.

Also,

$$\Delta u_k = \begin{bmatrix}\Delta u(k\mid k)^T & \cdots & \Delta u(k+m-1\mid k)^T\end{bmatrix}^T\in\mathbb{R}^{m\,nu},$$

$$x^d(k+m\mid k) = F^m x^d(k) + \tilde B^d\,\Delta u_k,\qquad \tilde B^d = \begin{bmatrix} F^{m-1}B^d & F^{m-2}B^d & \cdots & B^d\end{bmatrix},$$

$$\Psi(p+j-m) = \Psi(p-m)\,F^j \qquad (11)$$



In order to force $V_{k,2}$ to be bounded, we include the following constraint in the control
problem

$$x^s(k+m\mid k) - y_{sp,k} = 0\quad\text{or}\quad x^s(k) + \tilde B^s\,\Delta u_k - y_{sp,k} = 0$$

With the above equation and (11), Eq. (10) becomes

$$V_{k,2} = \sum_{j=1}^{\infty}\bigl(\Psi(p-m)F^j x^d(k+m\mid k)\bigr)^T Q_y\bigl(\Psi(p-m)F^j x^d(k+m\mid k)\bigr)$$

$$V_{k,2} = \bigl(F^m x^d(k) + \tilde B^d\,\Delta u_k\bigr)^T\bar Q^d\bigl(F^m x^d(k) + \tilde B^d\,\Delta u_k\bigr)$$

where

$$\bar Q^d = \sum_{j=1}^{\infty}\bigl(\Psi(p-m)F^j\bigr)^T Q_y\bigl(\Psi(p-m)F^j\bigr)$$

Finally, the term corresponding to the error on the input along the infinite horizon in
(7) can be written as follows

$$V_{k,3} = \sum_{j=0}^{\infty}\bigl(u(k+j\mid k)-u_{des,k}\bigr)^T Q_u\bigl(u(k+j\mid k)-u_{des,k}\bigr) \qquad (12)$$






Then, it is clear that in order to force (12) to be bounded one needs the inclusion of the
following constraint

$$u(k+m-1\mid k) - u_{des,k} = 0\quad\text{or}\quad u(k-1) + \tilde I_u^T\,\Delta u_k - u_{des,k} = 0 \qquad (13)$$

where $\tilde I_u^T = \begin{bmatrix} I_{nu} & \cdots & I_{nu}\end{bmatrix}$.

Then, assuming that (13) is satisfied, (12) can be written as follows

$$V_{k,3} = \bigl(\bar I_u u(k-1) + \bar M\,\Delta u_k - \bar I_u u_{des,k}\bigr)^T\bar Q_u\bigl(\bar I_u u(k-1) + \bar M\,\Delta u_k - \bar I_u u_{des,k}\bigr)$$



where

$$\bar I_u = \begin{bmatrix} I_{nu} & I_{nu} & \cdots & I_{nu}\end{bmatrix}^T,\qquad
\bar M = \begin{bmatrix} I_{nu} & 0 & \cdots & 0\\ I_{nu} & I_{nu} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ I_{nu} & I_{nu} & \cdots & I_{nu}\end{bmatrix},\qquad
\bar Q_u = \mathrm{diag}\bigl(\underbrace{Q_u,\dots,Q_u}_{m}\bigr)$$



Now, taking into account the proposed terminal constraints, the control cost defined in (7) 
can be written as follows 



$$\begin{aligned}
V_k &= \bigl(N_x x(k) + \bar S\,\Delta u_k - \bar I_y y_{sp,k}\bigr)^T\bar Q_y\bigl(N_x x(k) + \bar S\,\Delta u_k - \bar I_y y_{sp,k}\bigr)\\
&\quad+\bigl(F^m x^d(k) + \tilde B^d\,\Delta u_k\bigr)^T\bar Q^d\bigl(F^m x^d(k) + \tilde B^d\,\Delta u_k\bigr)\\
&\quad+\bigl(\bar I_u u(k-1) + \bar M\,\Delta u_k - \bar I_u u_{des,k}\bigr)^T\bar Q_u\bigl(\bar I_u u(k-1) + \bar M\,\Delta u_k - \bar I_u u_{des,k}\bigr) + \Delta u_k^T R\,\Delta u_k.
\end{aligned}$$

To formulate the IHMPC with zone control and input target for the time delayed nominal 
system, it is convenient to consider the output set point as an additional decision variable of 
the control problem and the controller results from the solution to the following 
optimization problem: 



$$\min_{\Delta u_k,\;y_{sp,k}}\ V_k = \Delta u_k^T H\,\Delta u_k + 2\,c_f^T\,\Delta u_k$$

subject to

$$u(k-1) + \tilde I_u^T\,\Delta u_k - u_{des,k} = 0 \qquad (14)$$

$$x^s(k) + \tilde B^s\,\Delta u_k - y_{sp,k} = 0 \qquad (15)$$

$$y_{\min} \le y_{sp,k} \le y_{\max} \qquad (16)$$

$$-\Delta u_{\max} \le \Delta u(k+j\mid k) \le \Delta u_{\max},\qquad j = 0,1,\dots,m-1$$

$$u_{\min} \le u(k-1) + \sum_{i=0}^{j}\Delta u(k+i\mid k) \le u_{\max},\qquad j = 0,1,\dots,m-1$$






where

$$H = \bar S^T\bar Q_y\bar S + \tilde B^{d\,T}\bar Q^d\tilde B^d + \bar M^T\bar Q_u\bar M + R$$

$$c_f^T = x(k)^T N_x^T\bar Q_y\bar S + x^d(k)^T(F^m)^T\bar Q^d\tilde B^d + \bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar M$$

Constraints (14) and (15) are terminal constraints; they mean that both the input error and
the integrating component of the output error will be null at the end of the control horizon
m. Constraint (16), on the other hand, forces the new decision variable $y_{sp,k}$ to lie inside the
zone given by $y_{\min}$ and $y_{\max}$. So, as $y_{sp,k}$ is a set-point variable, constraint (16) means that the
effective output set point of the proposed controller is now the complete feasible zone.
Notice that if the output bounds are set so that the upper bound equals the lower bound,
then the problem becomes the traditional set-point tracking problem.
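For implementation purposes, the matrices H and c_f of this quadratic cost can be assembled directly from the expressions above; the sketch below does so with NumPy, assuming the prediction matrices N_x, S̄, B̃^d, M̄, Ī_u, F^m and the block-diagonal weights have already been built for the chosen horizons (their construction is model specific and is not shown here).

```python
import numpy as np

def nominal_qp_data(x, xd, u_prev, u_des, Nx, Sbar, Bd_tilde, Mbar, Iu_bar,
                    F_pow_m, Qy_bar, Qd_bar, Qu_bar, R_bar):
    """Assemble H and c_f of the nominal cost V_k = du' H du + 2 c_f' du + const,
    following the expressions given below constraints (14)-(16)."""
    H = (Sbar.T @ Qy_bar @ Sbar
         + Bd_tilde.T @ Qd_bar @ Bd_tilde
         + Mbar.T @ Qu_bar @ Mbar
         + R_bar)                                     # R_bar: block-diagonal move weight
    c_f = (Sbar.T @ Qy_bar @ (Nx @ x)
           + Bd_tilde.T @ Qd_bar @ (F_pow_m @ xd)
           + Mbar.T @ Qu_bar @ (Iu_bar @ (u_prev - u_des)))
    return H, c_f
```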

4.1 Enlarging the feasible region 

The set of constraints added to the optimization problem in the last section may produce a 
severe reduction in the feasible region of the resulting controller. Specifically, since the input 
increments are usually bounded, the terminal constraints frequently result in infeasible 
problems, which means that it is not possible for the controller to achieve the constraints in 
m time steps, given that m is frequently small to reduce the computational cost. A possible 
solution to this problem is to incorporate slack variables in the terminal constraints. Then,
assuming that the slack variables are unconstrained, it is possible to guarantee that the
control problem will be feasible. Besides, these slack variables must be penalized in the cost
function with large weights to ensure that the constraint violations are minimized by the
control actions. Thus, the cost function can be written as follows



$$\begin{aligned}
V_k &= \sum_{j=0}^{p}\bigl(y(k+j\mid k)-y_{sp,k}-\delta_{y,k}\bigr)^T Q_y\bigl(y(k+j\mid k)-y_{sp,k}-\delta_{y,k}\bigr)\\
&\quad+\sum_{j=1}^{\infty}\bigl(y(k+p+j\mid k)-y_{sp,k}-\delta_{y,k}\bigr)^T Q_y\bigl(y(k+p+j\mid k)-y_{sp,k}-\delta_{y,k}\bigr)\\
&\quad+\sum_{j=0}^{m-1}\bigl(u(k+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)^T Q_u\bigl(u(k+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)\\
&\quad+\sum_{j=0}^{\infty}\bigl(u(k+m+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)^T Q_u\bigl(u(k+m+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)\\
&\quad+\sum_{j=0}^{m-1}\Delta u(k+j\mid k)^T R\,\Delta u(k+j\mid k) + \delta_{y,k}^T S_y\,\delta_{y,k} + \delta_{u,k}^T S_u\,\delta_{u,k}
\end{aligned}\qquad(17)$$

where $S_y$, $S_u$ are positive definite matrices of appropriate dimension and
$\delta_{y,k}\in\mathbb{R}^{ny}$, $\delta_{u,k}\in\mathbb{R}^{nu}$ are the slack variables (new decision variables) that eliminate any
infeasibility of the control problem. Following the same steps as in the controller where
slacks are not considered, it can be shown that the cost defined in (17) will be bounded if the
following constraints are included in the control problem:

$$x^s(k) + \tilde B^s\,\Delta u_k - y_{sp,k} - \delta_{y,k} = 0$$






$$u(k-1) + \tilde I_u^T\,\Delta u_k - u_{des,k} - \delta_{u,k} = 0 \qquad (18)$$

In this case, the cost defined in (17) can be reduced to the following quadratic function 



$$V_k = \begin{bmatrix}\Delta u_k^T & y_{sp,k}^T & \delta_{y,k}^T & \delta_{u,k}^T\end{bmatrix}
\begin{bmatrix} H_{11} & H_{12} & H_{13} & H_{14}\\ H_{21} & H_{22} & H_{23} & 0\\ H_{31} & H_{32} & H_{33} & 0\\ H_{41} & 0 & 0 & H_{44}\end{bmatrix}
\begin{bmatrix}\Delta u_k\\ y_{sp,k}\\ \delta_{y,k}\\ \delta_{u,k}\end{bmatrix}
+ 2\begin{bmatrix} c_{f,1} & c_{f,2} & c_{f,3} & c_{f,4}\end{bmatrix}\begin{bmatrix}\Delta u_k\\ y_{sp,k}\\ \delta_{y,k}\\ \delta_{u,k}\end{bmatrix} + c$$

where

$$H_{11} = \bar S^T\bar Q_y\bar S + (\tilde B^d)^T\bar Q^d\tilde B^d + \bar M^T\bar Q_u\bar M + R$$

$$H_{12} = H_{21}^T = -\bar S^T\bar Q_y\bar I_y,\qquad H_{13} = H_{31}^T = -\bar S^T\bar Q_y\bar I_y,\qquad H_{14} = H_{41}^T = -\bar M^T\bar Q_u\bar I_u$$

$$H_{22} = \bar I_y^T\bar Q_y\bar I_y,\qquad H_{23} = H_{32} = \bar I_y^T\bar Q_y\bar I_y,\qquad H_{33} = \bar I_y^T\bar Q_y\bar I_y + S_y,\qquad H_{44} = \bar I_u^T\bar Q_u\bar I_u + S_u$$

$$H_{24} = H_{42} = H_{34} = H_{43} = 0$$

$$c_{f,1} = x(k)^T N_x^T\bar Q_y\bar S + x^d(k)^T(F^m)^T\bar Q^d\tilde B^d + \bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar M$$

$$c_{f,2} = -x(k)^T N_x^T\bar Q_y\bar I_y,\qquad c_{f,3} = -x(k)^T N_x^T\bar Q_y\bar I_y,\qquad c_{f,4} = -\bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar I_u$$

$$c = x(k)^T N_x^T\bar Q_y N_x x(k) + x^d(k)^T(F^m)^T\bar Q^d F^m x^d(k) + \bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar I_u\bigl(u(k-1)-u_{des,k}\bigr)$$

Then, the nominally stable MPC controller with guaranteed feasibility for the case of output 
zone control of time delayed systems with input targets results from the solution to the 
following optimization problem: 
Problem P1:

$$\min_{\Delta u_k,\;y_{sp,k},\;\delta_{y,k},\;\delta_{u,k}}\ V_k$$

subject to:

$$-\Delta u_{\max} \le \Delta u(k+j\mid k) \le \Delta u_{\max},\qquad j = 0,1,\dots,m-1$$

$$u_{\min} \le u(k-1) + \sum_{i=0}^{j}\Delta u(k+i\mid k) \le u_{\max},\qquad j = 0,1,\dots,m-1$$

$$y_{\min} \le y_{sp,k} \le y_{\max} \qquad (19)$$

$$x^s(k) + \tilde B^s\,\Delta u_k - y_{sp,k} - \delta_{y,k} = 0 \qquad\bigl(\text{i.e. } x^s(k+m\mid k) - y_{sp,k} - \delta_{y,k} = 0\bigr)$$

$$u(k-1) + \tilde I_u^T\,\Delta u_k - u_{des,k} - \delta_{u,k} = 0 \qquad\bigl(\text{i.e. } u(k+m-1\mid k) - u_{des,k} - \delta_{u,k} = 0\bigr)$$

It must be noted that the use of slack variables is convenient not only to avoid dynamic
feasibility problems, but also to prevent stationary feasibility problems. Stationary feasibility
problems are usually produced by the supervisory optimization level shown in the control
structure defined in Figure 1. In such a case, for instance, the slack variable $\delta_{y,k}$ allows the
predicted output to be different from the set-point variable $y_{sp,k}$ at steady state (notice that
only $y_{sp,k}$ is constrained to be inside the desired zone). So, the slacked problem formulation
allows the system output to remain outside the desired zone if no stationary feasible
solution can be found.
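A compact formulation of Problem P1, including the slack variables and the terminal constraints, is sketched below as a quadratic program in CVXPY; the cost is passed in as a callable so that it can be built from the H and c_f blocks given above. This is an illustrative formulation under those assumptions, not the authors' code.

```python
import cvxpy as cp

def solve_problem_p1(x_s, u_prev, u_des, Bs_tilde, Iu_tilde,
                     y_min, y_max, du_max, u_min, u_max,
                     m, nu, ny, cost_fn):
    """Solve the nominal zone-control MPC (Problem P1).
    cost_fn(du, ysp, dy, du_slack) must return a cvxpy expression for V_k."""
    du  = cp.Variable(m * nu)   # stacked input moves Delta u_k
    ysp = cp.Variable(ny)       # set-point decision variable
    d_y = cp.Variable(ny)       # output slack
    d_u = cp.Variable(nu)       # input slack
    cons = [ysp >= y_min, ysp <= y_max,
            x_s + Bs_tilde @ du - ysp - d_y == 0,        # terminal output constraint
            u_prev + Iu_tilde @ du - u_des - d_u == 0]   # terminal input constraint
    for j in range(m):                                   # move and amplitude limits
        du_j = du[j*nu:(j+1)*nu]
        cons += [cp.abs(du_j) <= du_max]
        u_j = u_prev + sum(du[i*nu:(i+1)*nu] for i in range(j + 1))
        cons += [u_j >= u_min, u_j <= u_max]
    prob = cp.Problem(cp.Minimize(cost_fn(du, ysp, d_y, d_u)), cons)
    prob.solve()
    return du.value, ysp.value, d_y.value, d_u.value
```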

It can be shown that the controller produced through the solution of problem P1 results in a
stable closed-loop system for the nominal plant. However, the aim here is to extend this
formulation to the case of multi-model uncertainty.

5. Robust MPC with zone control and input target 

In the model formulation presented in (1) and (2) for the time delayed system, uncertainty
concentrates not only on matrices F, $B^s$ and $B^d$, as in the system without time delay, but also
on matrix $\theta\in\mathbb{R}^{ny\times nu}$, which contains all the time delays between the system inputs and
outputs. Observe that the step response coefficients $S_1,\dots,S_{p+1}$, which appear in the input
matrix, and $\Psi(p+1)$, which appears in the state matrix of the model defined in (1) and (2),
are also uncertain, but can be computed from F, $B^s$, $B^d$ and $\theta$. Now, considering the multi-
model uncertainty, assume that each model is designated by a set of parameters defined as
$\Theta_n = \{B^s_n, B^d_n, F_n, \theta_n\}$, $n = 1,\dots,L$. Also, assume that in this case $p \ge \max_n\theta_n(i,j) + m$ (this
condition guarantees that the state vectors of all models have the same dimension). Then, for
each model $\Theta_n$, we can define a cost function as follows



$$\begin{aligned}
V_k(\Theta_n) &= \sum_{j=0}^{p}\bigl(y_n(k+j\mid k)-y_{sp,k}(\Theta_n)-\delta_{y,k}(\Theta_n)\bigr)^T Q_y\bigl(y_n(k+j\mid k)-y_{sp,k}(\Theta_n)-\delta_{y,k}(\Theta_n)\bigr)\\
&\quad+\sum_{j=1}^{\infty}\bigl(y_n(k+p+j\mid k)-y_{sp,k}(\Theta_n)-\delta_{y,k}(\Theta_n)\bigr)^T Q_y\bigl(y_n(k+p+j\mid k)-y_{sp,k}(\Theta_n)-\delta_{y,k}(\Theta_n)\bigr)\\
&\quad+\sum_{j=0}^{m-1}\bigl(u(k+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)^T Q_u\bigl(u(k+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)\\
&\quad+\sum_{j=0}^{\infty}\bigl(u(k+m+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)^T Q_u\bigl(u(k+m+j\mid k)-u_{des,k}-\delta_{u,k}\bigr)\\
&\quad+\sum_{j=0}^{m-1}\Delta u(k+j\mid k)^T R\,\Delta u(k+j\mid k) + \delta_{y,k}(\Theta_n)^T S_y\,\delta_{y,k}(\Theta_n) + \delta_{u,k}^T S_u\,\delta_{u,k}
\end{aligned}\qquad(20)$$






Following the same steps as in the case of the nominal system, we can conclude that the cost
defined in (20) will be bounded if the control actions, set points and slack variables are such
that (18) is satisfied and

$$x^s(k) + \tilde B^s(\Theta_n)\,\Delta u_k - y_{sp,k}(\Theta_n) - \delta_{y,k}(\Theta_n) = 0$$

Then, if these conditions are satisfied, (20) can be written as follows

$$\begin{aligned}
V_k(\Theta_n) &= \bigl(N_x x(k) + \bar S(\Theta_n)\Delta u_k - \bar I_y y_{sp,k}(\Theta_n) - \bar I_y\delta_{y,k}(\Theta_n)\bigr)^T\bar Q_y\bigl(N_x x(k) + \bar S(\Theta_n)\Delta u_k - \bar I_y y_{sp,k}(\Theta_n) - \bar I_y\delta_{y,k}(\Theta_n)\bigr)\\
&\quad+\bigl(F(\Theta_n)^m x^d(k) + \tilde B^d(\Theta_n)\Delta u_k\bigr)^T\bar Q^d(\Theta_n)\bigl(F(\Theta_n)^m x^d(k) + \tilde B^d(\Theta_n)\Delta u_k\bigr)\qquad(21)\\
&\quad+\bigl(\bar I_u u(k-1) + \bar M\Delta u_k - \bar I_u u_{des,k} - \bar I_u\delta_{u,k}\bigr)^T\bar Q_u\bigl(\bar I_u u(k-1) + \bar M\Delta u_k - \bar I_u u_{des,k} - \bar I_u\delta_{u,k}\bigr)\\
&\quad+\Delta u_k^T R\,\Delta u_k + \delta_{y,k}(\Theta_n)^T S_y\,\delta_{y,k}(\Theta_n) + \delta_{u,k}^T S_u\,\delta_{u,k}
\end{aligned}$$



or, in compact form,

$$V_k(\Theta_n) = \begin{bmatrix}\Delta u_k^T & y_{sp,k}(\Theta_n)^T & \delta_{y,k}(\Theta_n)^T & \delta_{u,k}^T\end{bmatrix}
\begin{bmatrix} H_{11}(\Theta_n) & H_{12}(\Theta_n) & H_{13}(\Theta_n) & H_{14}\\ H_{21}(\Theta_n) & H_{22} & H_{23} & 0\\ H_{31}(\Theta_n) & H_{32} & H_{33} & 0\\ H_{41} & 0 & 0 & H_{44}\end{bmatrix}
\begin{bmatrix}\Delta u_k\\ y_{sp,k}(\Theta_n)\\ \delta_{y,k}(\Theta_n)\\ \delta_{u,k}\end{bmatrix}
+ 2\begin{bmatrix} c_{f,1}(\Theta_n) & c_{f,2} & c_{f,3} & c_{f,4}\end{bmatrix}\begin{bmatrix}\Delta u_k\\ y_{sp,k}(\Theta_n)\\ \delta_{y,k}(\Theta_n)\\ \delta_{u,k}\end{bmatrix} + c(\Theta_n) \qquad (22)$$

where

$$H_{11}(\Theta_n) = \bar S(\Theta_n)^T\bar Q_y\bar S(\Theta_n) + \bigl(\tilde B^d(\Theta_n)\bigr)^T\bar Q^d(\Theta_n)\tilde B^d(\Theta_n) + \bar M^T\bar Q_u\bar M + R$$

$$H_{12} = H_{21}^T = -\bar S(\Theta_n)^T\bar Q_y\bar I_y,\qquad H_{13} = H_{31}^T = -\bar S(\Theta_n)^T\bar Q_y\bar I_y,\qquad H_{14} = H_{41}^T = -\bar M^T\bar Q_u\bar I_u$$

$$H_{22} = \bar I_y^T\bar Q_y\bar I_y,\qquad H_{23} = H_{32} = \bar I_y^T\bar Q_y\bar I_y,\qquad H_{33} = \bar I_y^T\bar Q_y\bar I_y + S_y,\qquad H_{44} = \bar I_u^T\bar Q_u\bar I_u + S_u$$

$$H_{24} = H_{42} = H_{34} = H_{43} = 0$$

$$c_{f,1}(\Theta_n) = x(k)^T N_x^T\bar Q_y\bar S(\Theta_n) + x^d(k)^T\bigl(F(\Theta_n)^m\bigr)^T\bar Q^d(\Theta_n)\tilde B^d(\Theta_n) + \bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar M$$

$$c_{f,2} = -x(k)^T N_x^T\bar Q_y\bar I_y,\qquad c_{f,3} = -x(k)^T N_x^T\bar Q_y\bar I_y,\qquad c_{f,4} = -\bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar I_u$$




$$c(\Theta_n) = x(k)^T N_x^T\bar Q_y N_x x(k) + x^d(k)^T\bigl(F(\Theta_n)^m\bigr)^T\bar Q^d(\Theta_n)F(\Theta_n)^m x^d(k) + \bigl(u(k-1)-u_{des,k}\bigr)^T\bar I_u^T\bar Q_u\bar I_u\bigl(u(k-1)-u_{des,k}\bigr)$$



Then, the robust MPC for the system with time delay and multi-model uncertainty is 
obtained from the solution to the following problem: 
Problem P2 



$$\min_{\Delta u_k,\;y_{sp,k}(\Theta_n),\;\delta_{y,k}(\Theta_n),\;\delta_{u,k};\;n=1,\dots,L}\ V_k(\Theta_N)$$

subject to

$$-\Delta u_{\max} \le \Delta u(k+j\mid k) \le \Delta u_{\max},\qquad j = 0,1,\dots,m-1$$

$$u_{\min} \le u(k-1) + \sum_{i=0}^{j}\Delta u(k+i\mid k) \le u_{\max},\qquad j = 0,1,\dots,m-1$$

$$y_{\min} \le y_{sp,k}(\Theta_n) \le y_{\max},\qquad n = 1,\dots,L$$

$$x^s(k) + \tilde B^s(\Theta_n)\,\Delta u_k - y_{sp,k}(\Theta_n) - \delta_{y,k}(\Theta_n) = 0,\qquad n = 1,\dots,L \qquad (23)$$

$$u(k-1) + \tilde I_u^T\,\Delta u_k - u_{des,k} - \delta_{u,k} = 0 \qquad (24)$$

$$V_k\bigl(\Theta_n,\Delta u_k,\delta_{y,k}(\Theta_n),\delta_{u,k},y_{sp,k}(\Theta_n)\bigr) \le V_k\bigl(\Theta_n,\tilde{\Delta u}_k,\tilde\delta_{y,k}(\Theta_n),\tilde\delta_{u,k},\tilde y_{sp,k}(\Theta_n)\bigr),\qquad n = 1,\dots,L \qquad (25)$$

where, assuming that $\bigl(\Delta u^*_{k-1},\,y^*_{sp,k-1}(\Theta_n),\,\delta^*_{y,k-1}(\Theta_n),\,\delta^*_{u,k-1}\bigr)$ is the optimal solution to Problem
P2 at time step k-1, we define

$$\tilde{\Delta u}_k = \begin{bmatrix}\Delta u^*(k\mid k-1)^T & \cdots & \Delta u^*(k+m-2\mid k-1)^T & 0\end{bmatrix}^T;\qquad \tilde y_{sp,k}(\Theta_n) = y^*_{sp,k-1}(\Theta_n)$$

and $\tilde\delta_{u,k}$ such that

$$u(k-1) + \tilde I_u^T\,\tilde{\Delta u}_k - u_{des,k} - \tilde\delta_{u,k} = 0 \qquad (26)$$

and define $\tilde\delta_{y,k}(\Theta_n)$ such that

$$x^s(k) + \tilde B^s(\Theta_n)\,\tilde{\Delta u}_k - \tilde y_{sp,k}(\Theta_n) - \tilde\delta_{y,k}(\Theta_n) = 0 \qquad (27)$$

In (20), $\Theta_N$ corresponds to the nominal or most probable model of the system.

Remark 1: The cost to be minimized in problem P2 corresponds to the nominal model.
However, constraints (23) and (24) are imposed considering the estimated state of each
model $\Theta_n \in \Omega$. Constraint (25) is a non-increasing cost constraint that assures the
convergence of the true plant cost to zero.

Remark 2: The introduction of L set-point variables allows the simultaneous zeroing of all
the output slack variables. In that case, whenever possible, the set-point variable $y_{sp,k}(\Theta_n)$
will be equal to the output prediction at steady state (represented by $x^s_n(k+m)$), and so the
corresponding output penalization will be removed from the cost. As a result, the controller
gains some flexibility that allows it to achieve the other control objectives.

Remark 3: Note that, by hypothesis, one of the observers is based on the actual plant model,
and if the initial and the final steady states are known, then the estimated state $x_T(k)$ will
be equal to the actual plant state at each time k.

Remark 4: Conditions (26) and (27) are used to update the pseudo variables of constraint
(25), by taking into account the current state estimation $x^s_n(k)$ for each of the models lying
in $\Omega$, and the last value of the input target.

One important feature that a constrained controller should have is recursive feasibility
(i.e., if the optimization problem is feasible at a given time step, it should remain feasible at
any subsequent time step). The following lemma shows how the proposed controller
achieves this property.

Lemma. If problem P2 is feasible at time step k, it will remain feasible at any subsequent
time step k+j, j = 1, 2, ...

Proof:

Assume that the output zones remain fixed, and also assume that

$$\Delta u^*_k = \begin{bmatrix}\Delta u^*(k\mid k)^T & \cdots & \Delta u^*(k+m-1\mid k)^T\end{bmatrix}^T\in\mathbb{R}^{m\,nu}, \qquad (28)$$

$$y^*_{sp,k}(\Theta_1),\dots,y^*_{sp,k}(\Theta_L),\qquad \delta^*_{y,k}(\Theta_1),\dots,\delta^*_{y,k}(\Theta_L)\qquad\text{and}\qquad \delta^*_{u,k} \qquad (29)$$

correspond to the optimal solution to problem P2 at time k.

Consider now the pseudo variables $\bigl(\tilde{\Delta u}_{k+1},\ \tilde y_{sp,k+1}(\Theta_1),\dots,\tilde y_{sp,k+1}(\Theta_L),\ \tilde\delta_{y,k+1}(\Theta_1),\dots,\tilde\delta_{y,k+1}(\Theta_L),\ \tilde\delta_{u,k+1}\bigr)$, where

$$\tilde{\Delta u}_{k+1} = \begin{bmatrix}\Delta u^*(k+1\mid k)^T & \cdots & \Delta u^*(k+m-1\mid k)^T & 0\end{bmatrix}^T \qquad (30)$$

$$\tilde y_{sp,k+1}(\Theta_n) = y^*_{sp,k}(\Theta_n),\qquad n = 1,\dots,L. \qquad (31)$$

Also, the slacks $\tilde\delta_{u,k+1}$ and $\tilde\delta_{y,k+1}(\Theta_n)$ are such that

$$u(k) + \tilde I_u^T\,\tilde{\Delta u}_{k+1} - u_{des,k} - \tilde\delta_{u,k+1} = 0 \qquad (32)$$

and

$$x^s_n(k+1) + \tilde B^s(\Theta_n)\,\tilde{\Delta u}_{k+1} - \tilde y_{sp,k+1}(\Theta_n) - \tilde\delta_{y,k+1}(\Theta_n) = 0,\qquad n = 1,\dots,L \qquad (33)$$

We can show that the solution defined through (30) to (33) represents a feasible solution to
problem P2 at time k+1, which proves recursive feasibility. This means that if problem
P2 is feasible at time step k, then it will remain feasible at all successive time steps k+1,
k+2, ... □
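The shifted candidate used in this proof can be generated mechanically at run time; the sketch below builds the pseudo variables (30)–(33) from the optimum of the previous step, with all model matrices passed in as assumptions.

```python
import numpy as np

def shifted_candidate(du_opt, ysp_opt, u_now, u_des, x_s_list, Bs_list, Iu_tilde, nu):
    """Build the candidate solution (30)-(33) for time k+1 from the optimum at time k:
    shift the move sequence, keep the set points, and recompute the slacks."""
    du_tilde = np.concatenate([du_opt[nu:], np.zeros(nu)])   # (30): drop first move, append zeros
    ysp_tilde = [y.copy() for y in ysp_opt]                  # (31): keep previous set points
    du_slack = u_now + Iu_tilde @ du_tilde - u_des           # (32): input slack
    dy_slacks = [x_s + Bs @ du_tilde - ysp                   # (33): one output slack per model
                 for x_s, Bs, ysp in zip(x_s_list, Bs_list, ysp_tilde)]
    return du_tilde, ysp_tilde, dy_slacks, du_slack
```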

Now, the convergence of the closed-loop system with the robust controller resulting from
the above optimization problem can be stated as follows:




Theorem. Suppose that the undisturbed system starts at a known steady state and one of the 
state observers is based on the actual model of the plant. Consider also that the input target 
is moved to a new value, or the boundaries of the output zones are modified. Then, if 
condition (3) is satisfied for each model $\Theta_n \in \Omega$, the cost function of the undisturbed true
system in closed loop with the controller defined through the solution to problem P2 will 
converge to zero. 
Proof: 

Suppose that, at time k the uncertain system starts from a steady state corresponding to 
output y(k) = y ss and input u(k-l) = u ss . We have already shown that, with the model 
structure considered in (1) and (2), the model states corresponding to this initial steady state 
can be represented as follows: 



$$x_n(k) = \begin{bmatrix} y_{ss}^T & \cdots & y_{ss}^T & y_{ss}^T & 0\end{bmatrix}^T,\qquad n = 1,\dots,L$$

and consequently, $x^s_n(k) = y_{ss}$, $x^d_n(k) = 0$, $n = 1,\dots,L$.

At time k, the cost corresponding to the solution defined in (28) and (29) for the true model 

is given by 

$$\begin{aligned}
V^*_k(\Theta_T) &= \sum_{j=0}^{\infty}\Bigl\{\bigl(y_T(k+j\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)^T Q_y\bigl(y_T(k+j\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)\\
&\qquad\;+\bigl(u^*(k+j\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)^T Q_u\bigl(u^*(k+j\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)\Bigr\}\qquad(34)\\
&\quad+\sum_{j=0}^{m-1}\Delta u^*(k+j\mid k)^T R\,\Delta u^*(k+j\mid k)+\delta^*_{y,k}(\Theta_T)^T S_y\,\delta^*_{y,k}(\Theta_T)+\delta^{*T}_{u,k}S_u\,\delta^*_{u,k}
\end{aligned}$$

At time step k+1, the cost corresponding to the pseudo variables defined in (30) to (33) for
the true model is given by

$$\begin{aligned}
\tilde V_{k+1}(\Theta_T) &= \sum_{j=0}^{\infty}\Bigl\{\bigl(y_T(k+j+1\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)^T Q_y\bigl(y_T(k+j+1\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)\\
&\qquad\;+\bigl(u^*(k+j+1\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)^T Q_u\bigl(u^*(k+j+1\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)\Bigr\}\qquad(35)\\
&\quad+\sum_{j=0}^{m-1}\Delta\tilde u(k+j+1\mid k)^T R\,\Delta\tilde u(k+j+1\mid k)+\delta^*_{y,k}(\Theta_T)^T S_y\,\delta^*_{y,k}(\Theta_T)+\delta^{*T}_{u,k}S_u\,\delta^*_{u,k}
\end{aligned}$$

Observe that, since the same input sequence is used and the current estimated state 
corresponding to the actual model of the plant is equal to the actual state, then the predicted 
state and output trajectory will be the same as the optimal predicted trajectories at time step 
k. That is, for any $j \ge 1$, we have

$$x_T(k+j\mid k+1) = x_T(k+j\mid k)$$






and 



$$y_T(k+j\mid k+1) = y_T(k+j\mid k)$$

In addition, for the true model we have $\tilde\delta_{y,k+1}(\Theta_T) = \delta^*_{y,k}(\Theta_T)$ and $\tilde\delta_{u,k+1} = \delta^*_{u,k}$. However,
the first of these equalities is not true for the other models, as for these models we have
$x_n(k+1\mid k+1)\ne x_n(k+1\mid k)$ for $\Theta_n\ne\Theta_T$.

Now, subtracting (35) from (34) we have

$$V^*_k(\Theta_T) - \tilde V_{k+1}(\Theta_T) = \bigl(y_T(k\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)^T Q_y\bigl(y_T(k\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr) + \bigl(u^*(k\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)^T Q_u\bigl(u^*(k\mid k)-u_{des,k}-\delta^*_{u,k}\bigr) + \Delta u(k)^T R\,\Delta u(k)$$
and, from constraint (25), the following relation is obtained

$$V^*_{k+1}(\Theta_T) \le \tilde V_{k+1}(\Theta_T),$$

which finally implies

$$V^*_k(\Theta_T) - V^*_{k+1}(\Theta_T) \ge \bigl(y_T(k\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr)^T Q_y\bigl(y_T(k\mid k)-y^*_{sp,k}(\Theta_T)-\delta^*_{y,k}(\Theta_T)\bigr) + \bigl(u^*(k\mid k)-u_{des,k}-\delta^*_{u,k}\bigr)^T Q_u\bigl(u^*(k\mid k)-u_{des,k}-\delta^*_{u,k}\bigr) + \Delta u(k)^T R\,\Delta u(k) \qquad (36)$$

Since the right hand side of (36) is positive definite, the successive values of the cost will be
strictly decreasing and, for a large enough time $\bar k$, we will have $V^*_{\bar k}(\Theta_T) - V^*_{\bar k+1}(\Theta_T) = 0$,
which proves the convergence of the cost.
The convergence of $V_k(\Theta_T)$ means that, at steady state, the following relations should hold

$$\Delta u^*(k) = 0,\qquad y_T(k\mid k) - y^*_{sp,k}(\Theta_T) - \delta^*_{y,k}(\Theta_T) = 0,\qquad u^*(k\mid k) - u_{des,k} - \delta^*_{u,k} = 0$$

At steady state, the state is such that

$$x_n(k) = \begin{bmatrix} y(k)^T & y(k)^T & \cdots & y(k)^T & x^s_n(k)^T & x^d_n(k)^T\end{bmatrix}^T,\qquad x^s_n(k) = y(k),\qquad x^d_n(k) = 0,$$

where $y(k)$ is the actual plant output. Note that the state component $x^d_n(k)$ is null, as it
corresponds to the stable modes of the system and the input increment is null at steady
state. Then, constraint (23) can be written as follows:




$$\delta_{y,k}(\Theta_n) = y_n(k\mid k) - y_{sp,k}(\Theta_n) = y(k) - y_{sp,k}(\Theta_n),\qquad n = 1,\dots,L. \qquad (37)$$

This means that, if the output of the true system is stabilized inside the output zone, then 
the set point corresponding to each particular model will be placed by the optimizer 
exactly at the output predicted values. As a result, all the output slacks will be null. On 
the other hand, if the output of the true system is stabilized at a value outside the output 
zone, then the set-point variable corresponding to any particular model will be placed by 
the optimizer at the boundary of the zone. In this case, the output slack variables will be 
different from zero, but they will all have the same numerical value as can be seen from 
(37). 

Now, to strictly prove the convergence of the input and output to their corresponding
targets, we must show that the slacks $\delta_{u,\bar k}$ and $\delta_{y,\bar k}(\Theta_T)$ converge to zero. It is necessary at
this point to notice that in the case of zone control the degrees of freedom of the system are
no longer the same as in the fixed set-point problem. So, the desired input values may be
exactly achieved by the true system, even in the presence of some bounded disturbances. Let
us now assume that the system is stabilized at a point where $\delta_{y,\bar k}(\Theta_1) = \dots = \delta_{y,\bar k}(\Theta_L) \ne 0$
and $\delta_{u,\bar k} \ne 0$. In addition, assume that the desired input value is constant at $u_{des,k}$. Then, at
time $\bar k$ large enough, the cost corresponding to model $\Theta_n$ will be reduced to

$$V_{\bar k}(\Theta_n) = \delta_{y,\bar k}(\Theta_n)^T S_y\,\delta_{y,\bar k}(\Theta_n) + \delta_{u,\bar k}^T S_u\,\delta_{u,\bar k},\qquad n = 1,\dots,L, \qquad (38)$$

and constraints (23) and (24) become

$$x^s_n(\bar k) - y_{sp,\bar k}(\Theta_n) = \delta_{y,\bar k}(\Theta_n),\qquad n = 1,\dots,L \qquad (39)$$

and $u(\bar k-1) - u_{des,k} = \delta_{u,\bar k}$.

Since $x^s_n(\bar k) = y(\bar k)$, $n = 1,\dots,L$, Eq. (39) can be written as $y(\bar k) - y_{sp,\bar k}(\Theta_n) = \delta_{y,\bar k}(\Theta_n)$.

Now, we want to show that if $u(\bar k-1)$ and $u_{des,k}$ are not on the boundary of the input
operating range, then it is possible to guide the system toward a point in which the slack
variables $\delta_{y,\bar k}(\Theta_n)$ and $\delta_{u,\bar k}$ are null, and this point has a smaller cost than the steady state
defined above. Assume also for simplicity that m=1. Let us consider a candidate solution to
problem P2 defined by:

$$\Delta u(\bar k\mid\bar k) = u_{des,k} - u(\bar k-1) = -\delta_{u,\bar k} \qquad (40)$$

and

$$y_{sp,\bar k}(\Theta_n) = y(\bar k) - B^s(\Theta_n)\,\delta_{u,\bar k} \qquad (41)$$

Now, consider the cost function defined in (21), written for time step $\bar k$ and the control
move defined in (40) and the output set point defined in (41):






$$\begin{aligned}
\tilde V_{\bar k}(\Theta_n) &= \bigl(\bar I_y y(\bar k) - \bar S(\Theta_n)\delta_{u,\bar k} - \bar I_y y_{sp,\bar k}(\Theta_n) - \bar I_y\tilde\delta_{y,\bar k}(\Theta_n)\bigr)^T\bar Q_y\bigl(\bar I_y y(\bar k) - \bar S(\Theta_n)\delta_{u,\bar k} - \bar I_y y_{sp,\bar k}(\Theta_n) - \bar I_y\tilde\delta_{y,\bar k}(\Theta_n)\bigr)\\
&\quad+\bigl(F(\Theta_n)^m x^d(\bar k) - \tilde B^d(\Theta_n)\delta_{u,\bar k}\bigr)^T\bar Q^d(\Theta_n)\bigl(F(\Theta_n)^m x^d(\bar k) - \tilde B^d(\Theta_n)\delta_{u,\bar k}\bigr)\\
&\quad+\bigl(\bar I_u u(\bar k-1) - \bar M\delta_{u,\bar k} - \bar I_u u_{des,k} - \bar I_u\tilde\delta_{u,\bar k}\bigr)^T\bar Q_u\bigl(\bar I_u u(\bar k-1) - \bar M\delta_{u,\bar k} - \bar I_u u_{des,k} - \bar I_u\tilde\delta_{u,\bar k}\bigr)\\
&\quad+\delta_{u,\bar k}^T R\,\delta_{u,\bar k} + \tilde\delta_{y,\bar k}(\Theta_n)^T S_y\,\tilde\delta_{y,\bar k}(\Theta_n) + \tilde\delta_{u,\bar k}^T S_u\,\tilde\delta_{u,\bar k}
\end{aligned}$$

Now, since the solution defined by $\bigl[\Delta u(\bar k\mid\bar k),\ \tilde\delta_{y,\bar k}(\Theta_n),\ \tilde\delta_{u,\bar k}\bigr]$ satisfies constraints (23) and
(24), the above cost can be reduced to

$$\tilde V_{\bar k}(\Theta_n) = \delta_{u,\bar k}^T\,S_u^{\min}(\Theta_n)\,\delta_{u,\bar k}$$

where

$$S_u^{\min}(\Theta_n) = \bigl[\bar I_y B^s(\Theta_n) - \bar S(\Theta_n)\bigr]^T\bar Q_y\bigl[\bar I_y B^s(\Theta_n) - \bar S(\Theta_n)\bigr] + \tilde B^d(\Theta_n)^T\bar Q^d(\Theta_n)\tilde B^d(\Theta_n) + R$$

Then, if

$$S_u > S_u^{\min}(\Theta_n),\qquad n = 1,\dots,L, \qquad (42)$$



the cost corresponding to the decision variables defined in (40) and (41) will be smaller than
the cost obtained in (38). This means that it is not possible for the system to remain at a point
in which the slack variables $\delta_{y,\bar k}(\Theta_n)$, $n = 1,\dots,L$, and $\delta_{u,\bar k}$ are different from zero.
Thus, as long as the system remains controllable, condition (42) is sufficient to guarantee the
convergence of the system inputs to their targets while the system outputs remain within
the output zones. □

Observe that only matrix $S_u$ is involved in condition (42) because condition (3) assures that
the corrected output prediction, i.e. the one corresponding to the desired input values, lies
in the feasible zone. In this case, for any positive definite matrix $S_y$, the total cost can be reduced by
making the set-point variable equal to the steady-state output prediction, which is a feasible
solution and produces no additional cost. However, matrix $S_y$ should be chosen large
enough to avoid numerical problems in the optimization solution.

Remark 5: We can prove the stability of the proposed zone controller under the same
assumptions considered in the proof of the convergence. Output tracking stability means
that for every $\gamma > 0$ there exists a $\rho(\gamma)$ such that if $\|\bar x_T(0)\| < \rho$, then $\|\bar x_T(k)\| < \gamma$ for all
$k \ge 0$, where the extended state of the true system $\bar x_T(k)$ may be defined as follows

$$\bar x_T(k) = \begin{bmatrix} y_T(k\mid k) - x^s_T(k)\\ \vdots\\ y_T(k+p\mid k) - x^s_T(k)\\ x^s_T(k) - y^*_{sp,k-1}(\Theta_T)\\ x^d_T(k)\\ u(k-1) - u_{des,k}\end{bmatrix}$$




To simplify the proof, we still assume that m=1, and suppose that the optimal solution
obtained at step k-1 is given by $\Delta u^*_{k-1} = \Delta u^*(k-1\mid k-1)$, $y^*_{sp,k-1}(\Theta_1),\dots,y^*_{sp,k-1}(\Theta_L)$,
$\delta^*_{y,k-1}(\Theta_1),\dots,\delta^*_{y,k-1}(\Theta_L)$ and $\delta^*_{u,k-1}$.

A feasible solution to problem P2 at time k is given by:

$\tilde{\Delta u}_k = 0$, $\tilde y_{sp,k}(\Theta_n) = y^*_{sp,k-1}(\Theta_n)$, and $\tilde\delta_{u,k}$ and $\tilde\delta_{y,k}(\Theta_n)$ are such that

$$u(k-1) + \tilde I_u^T\,\tilde{\Delta u}_k - u_{des,k} - \tilde\delta_{u,k} = 0 \qquad (43)$$

$$x^s_n(k) + \tilde B^s(\Theta_n)\,\tilde{\Delta u}_k - \tilde y_{sp,k}(\Theta_n) - \tilde\delta_{y,k}(\Theta_n) = 0,\qquad n = 1,\dots,L. \qquad (44)$$

Since $\Delta u(k\mid k) = 0$, we have $u(k\mid k) = u(k-1)$ and from (43) we can write

$$\tilde\delta_{u,k} = u(k\mid k) - u_{des,k} \qquad (45)$$

For the true system, (44) can be written as follows

$$x^s_T(k) - y^*_{sp,k-1}(\Theta_T) - \tilde\delta_{y,k}(\Theta_T) = 0$$

and consequently, we have the following relations

$$\tilde\delta_{y,k}(\Theta_T) = x^s_T(k) - y^*_{sp,k-1}(\Theta_T) \qquad (46)$$

and

$$x^s_T(k) = y^*_{sp,k-1}(\Theta_T) + \tilde\delta_{y,k}(\Theta_T) \qquad (47)$$



For the feasible solution defined above, the cost defined in (21) can be written for the true
model $\Theta_T$ as follows



$$\begin{aligned}
\tilde V_k(\Theta_T) &= \bigl(N_x x(k) - \bar I_y y^*_{sp,k-1}(\Theta_T) - \bar I_y\tilde\delta_{y,k}(\Theta_T)\bigr)^T\bar Q_y\bigl(N_x x(k) - \bar I_y y^*_{sp,k-1}(\Theta_T) - \bar I_y\tilde\delta_{y,k}(\Theta_T)\bigr)\\
&\quad+\bigl(F(\Theta_T)^m x^d_T(k)\bigr)^T\bar Q^d(\Theta_T)\bigl(F(\Theta_T)^m x^d_T(k)\bigr)\qquad(48)\\
&\quad+\bigl(\bar I_u u(k-1) - \bar I_u u_{des,k} - \bar I_u\tilde\delta_{u,k}\bigr)^T\bar Q_u\bigl(\bar I_u u(k-1) - \bar I_u u_{des,k} - \bar I_u\tilde\delta_{u,k}\bigr)\\
&\quad+\tilde\delta_{y,k}(\Theta_T)^T S_y\,\tilde\delta_{y,k}(\Theta_T) + \tilde\delta_{u,k}^T S_u\,\tilde\delta_{u,k}
\end{aligned}$$

Now, using (45), (46) and (47), the cost defined in (48) can be reduced to the following
expression

$$\tilde V_k(\Theta_T) = \bar x_T(k)^T\Bigl[C_1^T\bar Q_y C_1 + C_2^T\bigl(F(\Theta_T)^m\bigr)^T\bar Q^d(\Theta_T)\bigl(F(\Theta_T)^m\bigr)C_2 + C_3^T S_y C_3 + C_4^T S_u C_4\Bigr]\bar x_T(k)$$

where

$$C_1 = \begin{bmatrix} I_{(p+1)ny} & 0_{(p+1)ny\times ny} & 0_{(p+1)ny\times nd} & 0_{(p+1)ny\times nu}\end{bmatrix}$$

$$C_2 = \begin{bmatrix} 0_{nd\times(p+1)ny} & 0_{nd\times ny} & I_{nd} & 0_{nd\times nu}\end{bmatrix}$$

$$C_3 = \begin{bmatrix} 0_{ny\times(p+1)ny} & I_{ny} & 0_{ny\times nd} & 0_{ny\times nu}\end{bmatrix}$$

$$C_4 = \begin{bmatrix} 0_{nu\times(p+1)ny} & 0_{nu\times ny} & 0_{nu\times nd} & I_{nu}\end{bmatrix}$$

Thus, the cost defined in (48) can be written as follows:

$$\tilde V_k(\Theta_T) = \bar x_T(k)^T H_1(\Theta_T)\,\bar x_T(k), \qquad (49)$$

where $H_1(\Theta_T) = C_1^T\bar Q_y C_1 + C_2^T\bigl(F(\Theta_T)^m\bigr)^T\bar Q^d(\Theta_T)\bigl(F(\Theta_T)^m\bigr)C_2 + C_3^T S_y C_3 + C_4^T S_u C_4$.

Because of constraint (25), the optimal true cost (that is, the cost based on the true model,
considering the optimal solution that minimizes the nominal cost at time k) will satisfy

$$V^*_k(\Theta_T) \le \tilde V_k(\Theta_T) \qquad (50)$$

and

$$V^*_{k+n}(\Theta_T) \le V^*_k(\Theta_T)\quad\text{for any } n \ge 1. \qquad (51)$$



By a similar procedure as above and based on the optimal solution at time k+n, we can find
a feasible solution to Problem P2 at time k+n+1, for any $n \ge 1$, such that

$$\tilde V_{k+n+1}(\Theta_T) \le V^*_{k+n}(\Theta_T) \qquad (52)$$

and, from the definition of $\tilde V_{k+n+1}$, we have

$$\tilde V_{k+n+1}(\Theta_T) = \bar x_T(k+n+1)^T H_1(\Theta_T)\,\bar x_T(k+n+1)$$

Therefore, combining inequalities (49) to (52) results in

$$\bar x_T(k+n+1)^T H_1(\Theta_T)\,\bar x_T(k+n+1) \le \bar x_T(k)^T H_1(\Theta_T)\,\bar x_T(k),\qquad \forall n \ge 1.$$

As $H_1(\Theta_T)$ is positive definite, it follows that

$$\|\bar x_T(k+n+1)\| \le \alpha(\Theta_T)\,\|\bar x_T(k)\|,\qquad \forall n \ge 1$$



where

$$\alpha(\Theta_T) = \Biggl[\frac{\lambda_{\max}\bigl(H_1(\Theta_T)\bigr)}{\lambda_{\min}\bigl(H_1(\Theta_T)\bigr)}\Biggr]^{1/2} \le \max_n\Biggl[\frac{\lambda_{\max}\bigl(H_1(\Theta_n)\bigr)}{\lambda_{\min}\bigl(H_1(\Theta_n)\bigr)}\Biggr]^{1/2}$$

If we restrict the state at time k to the set defined by $\|\bar x_T(k)\| \le \rho$, then the state at time k+n+1
will be inside the set defined by




$$\|\bar x_T(k+n+1)\| \le \alpha(\Theta_T)\,\rho,\qquad \forall n \ge 1,$$

which proves stability of the closed-loop system, as $\bar x_T$ will remain inside the ball
$\|\bar x_T\| \le \alpha(\Theta_T)\rho$, where $\alpha(\Theta_T)$ is bounded, as long as the closed loop starts from a state inside
the ball $\|\bar x_T\| \le \rho$. Therefore, as we have already proved the convergence of the closed loop,
we can now assure that, under the assumption of state controllability at the final equilibrium
point, the proposed MPC is asymptotically stable. □

Remark 6: It is important to observe that even if condition (3) cannot be satisfied by the 
input target, or the input target is such that one or more outputs need to be kept outside 
their zones, the proposed controller will still be stable. This is a consequence of the 
decreasing property of the cost function (inequality (36)) and the inclusion of appropriate 
slack variables into the optimization problem. When no feasible solution exists, the system 
will evolve to an operating point in which the slack variables, which at steady state are the 
same for all the models, are as small as possible, but different from zero. This is an 
important aspect of the controller, as in practical applications a disturbance may move the 
system to a point from which it is not possible to reach a steady state that satisfies (3). When
this happens, the controller will do its best to compensate for the disturbance, while
maintaining the system under control.

Remark 7: We may consider the case when the desired input target $u_{des,k}$ is outside the
feasible set $\mathbb{S}_u$ and the case where the set $\mathbb{S}_u$ itself is empty. If $\mathbb{S}_u$ is not empty, the input target
$u_{des,k}$ could be located within the global input feasible set $\mathbb{S}$, but outside the restricted input
feasible set $\mathbb{S}_u$. In this case, the slack variables at steady state, $\delta_{u,ss}$ and $\delta_{y,ss}(\Theta_n)$, cannot be
simultaneously zeroed, and the relative magnitude of matrices $S_y$ and $S_u$ will define the
equilibrium point. If the priority is to maintain the output inside the corresponding range,
the choice must be $S_y \gg S_u$, while preserving $S_u > S_u^{\min}$. Then, the controller will guide the
system to a point in which $\delta_{y,ss}(\Theta_n) \approx 0$, $n = 1,\dots,L$, and $\delta_{u,ss} \ne 0$. On the other hand, if $\mathbb{S}_u$
is empty, that is, there is no input belonging to the global input feasible set $\mathbb{S}$ that
simultaneously satisfies all the zones for the models lying in $\Omega$, then the slack variables
$\delta_{y,ss}(\Theta_n)$, $n = 1,\dots,L$, cannot be zeroed, no matter the value of $\delta_{u,ss}$. In this case (assuming
that $S_y \gg S_u$), the slack variables $\delta_{y,ss}(\Theta_n)$, $n = 1,\dots,L$, will be made as small as possible,
independently of the value of $\delta_{u,ss}$. Then, once the output slacks are established, the input
slack will be accommodated to satisfy these values of the outputs.

6. Simulation results for the system with time delay 

The system adopted to test the performance of the robust controller presented here is based 
on the FCC system presented in Sotomayor and Odloak (2005) and Gonzalez et al. (2009). It 
is a typical example of the chemical process industry, and instead of output set points, this 
system has output zones. The objective of the controller is then to guide the manipulated 
inputs to the corresponding targets and to maintain the outputs (that are more numerous 
than the inputs) within the corresponding feasible zones. The system considered here has 2 
inputs and 3 outputs. Three models constitute the multi-model set Q on which the robust 
controller is based. In two of these models, time delays were included to represent a possible 
degradation of the process conditions along an operation campaign. The third model 
corresponds to the process at the design conditions. The parameters corresponding to each 
of these models can be seen in the following transfer functions: 






$$G(\Theta_1) = \begin{bmatrix}
\dfrac{0.4515\,e^{-1s}}{2.9846s+1} & \dfrac{0.20336}{1.7187s+1}\\[2mm]
\dfrac{1.5\,e^{-6s}}{20s+1} & \dfrac{(0.1886s-3.8087)\,e^{-3s}}{17.7347s^2+10.8348s+1}\\[2mm]
\dfrac{1.7455\,e^{-6s}}{9.1085s+1} & \dfrac{-6.1355\,e^{-4s}}{10.9088s+1}
\end{bmatrix}$$

$$G(\Theta_2) = \begin{bmatrix}
\dfrac{0.25\,e^{-1s}}{3.5s+1} & \dfrac{0.135\,e^{-6s}}{2.77s+1}\\[2mm]
\dfrac{0.9\,e^{-6s}}{25s+1} & \dfrac{(0.1886s-2.8)\,e^{-4s}}{19.7347s^2+10.8348s+1}\\[2mm]
\dfrac{1.25\,e^{-5s}}{11.1085s+1} & \dfrac{-5\,e^{-4s}}{12.9088s+1}
\end{bmatrix}$$

$$G(\Theta_3) = \begin{bmatrix}
\dfrac{0.7}{1.98s+1} & \dfrac{0.5}{2.7s+1}\\[2mm]
\dfrac{2.3}{25s+1} & \dfrac{0.1886s-4.8087}{15.7347s^2+10.8348s+1}\\[2mm]
\dfrac{3}{7s+1} & \dfrac{-8.1355}{7.9088s+1}
\end{bmatrix}$$



In this reduced system, the manipulated input variables are: $u_1$, air flow rate to the
catalyst regenerator, and $u_2$, opening of the regenerated catalyst valve; the controlled outputs
are: $y_1$, riser temperature; $y_2$, regenerator dense phase temperature; and $y_3$, regenerator
dilute phase temperature.

In the simulations considered here, model $\Theta_1$ is assumed to be the true model, while model
$\Theta_3$ represents the nominal model that is used in the MPC cost. In the discussion that
follows, unless explicitly mentioned, the adopted tuning parameters of the controller are
$T = 1$, $Q_y = 0.5\,\mathrm{diag}(1\;1\;1)$, $Q_u = \mathrm{diag}(1\;1)$, $R = \mathrm{diag}(1\;1)$,
$S_y = 10^5\,\mathrm{diag}(1\;1\;1)$ and $S_u = 10^5\,\mathrm{diag}(1\;1)$. The input and output constraints, as well
as the maximum input increments, are shown in Tables 1 and 2.



Output      y_min    y_max
y1 (°C)      510      530
y2 (°C)      600      610
y3 (°C)      530      590

Table 1. Output zones of the FCC system


Input        Δu_max   u_min    u_max
u1 (ton/h)     25       75      250
u2 (%)         25       25      101

Table 2. Input constraints of the FCC system






Before starting the detailed analysis of the properties of the proposed robust controller, we
find it useful to justify the need for a robust controller for this specific system. We compare
the performance of the proposed robust controller, defined through Problem P2, with the
performance of the nominal MPC defined through Problem P1. We consider the same
scenario described above, except that the input targets are not fully included in the
control problem (we consider a target only for input $u_1$, by simply making $Q_u = \mathrm{diag}(1\;0)$
and $S_u = 10^5\,\mathrm{diag}(1\;0)$). This is a possible situation that may happen in practice when the
process optimizer sends a target to only one of the inputs. Figures 2 and 3 show the output
and input responses, respectively, for the two controllers when the system starts from a
steady state where the outputs are outside their zones. It is clear that the conventional MPC
cannot stabilize the plant corresponding to model $\Theta_1$ when the controller uses model $\Theta_3$ to
calculate the output predictions. However, the proposed robust controller performs quite
well and is able to bring the three outputs to their zones.



Fig. 2. Controlled outputs for the nominal and robust MPC.

We now concentrate our analysis on the application of the proposed controller to the FCC
system. As defined in Eq. (5), each of the three models produces an input feasible set,
whose intersection constitutes the restricted input feasible set of the controller. These sets
have different shapes and sizes for different stationary operating points, since the bias
$d_{n,ss}$ is embedded in Eq. (5), except for the true model, whose input feasible set remains
unmodified because the estimated states exactly match the true states. The closed-loop
simulation begins at $u_{ss} = [230.5977\;\;60.2359]$ and $y_{ss} = [549.5011\;\;704.2756\;\;690.6233]$,
which are values taken from the real FCC system. For this operating point, the
input feasible sets corresponding to models 1, 2 and 3 are depicted in Figure 4. These sets are
quite distinct from each other, which results in an empty restricted feasible input set for the
controller ($\mathbb{S}_u = \mathbb{S}_u(\Theta_1)\cap\mathbb{S}_u(\Theta_2)\cap\mathbb{S}_u(\Theta_3)$). This means that we cannot find an input that,






taking into account the gains of all the models and all the estimated states, satisfies the 
output constraints. 
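In practice, the emptiness of the restricted set $\mathbb{S}_u = \mathbb{S}_u(\Theta_1)\cap\mathbb{S}_u(\Theta_2)\cap\mathbb{S}_u(\Theta_3)$ can be verified by posing the intersection as a linear feasibility problem; the sketch below does this with scipy.optimize.linprog, taking the steady-state gains and the biases $d_{n,ss}$ as inputs (the numerical values of the operating point above are not reproduced here).

```python
import numpy as np
from scipy.optimize import linprog

def restricted_set_is_empty(u_min, u_max, y_min, y_max, Bs_list, d_ss_list):
    """Return True if no input u with u_min <= u <= u_max satisfies
    y_min <= B^s(Theta_n) u + d_{n,ss} <= y_max for every model."""
    A_ub, b_ub = [], []
    for Bs, d in zip(Bs_list, d_ss_list):
        A_ub.append(Bs);  b_ub.append(np.asarray(y_max) - d)   #  B^s u <= y_max - d
        A_ub.append(-Bs); b_ub.append(d - np.asarray(y_min))   # -B^s u <= d - y_min
    res = linprog(c=np.zeros(len(u_min)),
                  A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=list(zip(u_min, u_max)), method="highs")
    return not res.success
```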

Fig. 3. Manipulated inputs for the nominal and robust MPC.

Fig. 4. Input feasible sets of the FCC system.






The first objective of the control simulation is to stabilize the system input at
$u^a_{des} = [165\;\;60]$. This input corresponds to the output $y = [520\;\;606.8\;\;577.6]$ for the true
system ($\Theta_1$), which results in the input feasible sets shown in Figure 5a. In this figure, it can
be seen that the input feasible set corresponding to model 1 is the same as in Fig. 4, while the
sets corresponding to the other models adapt their shape and size to the new steady state.
Once the system is stabilized at this new steady state, we simulate a step change in the
input target (at time step k = 50 min). The new target is given by $u^b_{des} = [175\;\;64]$, and
the corresponding input feasible sets are shown in Figure 5b. In this case, it can be seen that
the new target remains inside the new input feasible set $\mathbb{S}_u$, which means that the cost can
be guided to zero for the true model. Finally, at time step k = 100 min, when the system
reaches the steady state, a different input target is introduced ($u^c_{des} = [175\;\;58]$). Differently
from the previous targets, this new target is outside the input feasible set $\mathbb{S}_u$, as can be seen
in Figure 5c. Since in this case the cost cannot be guided to zero and the output
requirements are more important than the input ones, the inputs are stabilized at a feasible
point as close as possible to the desired target. This is an interesting property of the
controller, as such a change in the target is likely to occur in real plant operation.





Fig. 5. (a): Initial input feasible sets; (b): Input feasible sets when the first input target is
changed; and (c): Input feasible sets when the second input target is changed.






Fig. 6. Controlled outputs and set points for the FCC subsystem with modified input target.

Fig. 7. Manipulated inputs for the FCC subsystem with different input target.

Figure 6 shows the true system outputs (solid line), the set point variables (dotted line) and 
the output zones (dashed line) for the complete sequence of changes. Figure 7, on the other 
hand, shows the inputs (solid line), and the input targets (dotted line) for the same 
sequence. As was established in Theorem 1, the cost function corresponding to the true 
system is strictly decreasing, and this can be seen in Figure 8. In this figure, the solid line 






represents the true cost function, while the dotted line represents the cost corresponding to 
model 3. It is interesting to observe that this last cost function is not decreasing, since the 
estimated state does not match exactly the true state. Note also that in the last period of 
time, the cost does not reach zero, as the new target is not inside the input feasible set. 



Fig. 8. Cost function corresponding to the true system (solid line) and cost corresponding to
model 3 (dotted line).



Output      y_min    y_max
y1 (°C)      510      550
y2 (°C)      400      500
y3 (°C)      350      500

Table 3. New output zones for the FCC subsystem

Next, we simulate a change in the output zones. The new bounds are given in Table 3.
Corresponding to the new control zones, the input feasible set changes its dimension and
shape significantly. In Figure 9, the initial feasible set for the true model is shown together
with the new input feasible sets $\mathbb{S}^*_u(\Theta_1)$, $\mathbb{S}^*_u(\Theta_2)$ and $\mathbb{S}^*_u(\Theta_3)$ for the three
models considered in the robust controller. Since the input target is outside the new restricted
set $\mathbb{S}^*_u = \mathbb{S}^*_u(\Theta_1)\cap\mathbb{S}^*_u(\Theta_2)\cap\mathbb{S}^*_u(\Theta_3)$, it is not possible to guide the system to a point
in which the control cost is null at the end of the simulation time. When the output weight
$S_y$ is as large as the input weight $S_u$, all the outputs are guided to their corresponding zones,
while the inputs show a steady-state offset with respect to the target $u^a_{des}$. The complete
behavior of the outputs and inputs of the FCC subsystem, as well as the output set points,
can be seen in Figures 10 and 11, respectively, when $S_y = 10^3\,\mathrm{diag}(1\;1\;1)$ and
$S_u = 10^3\,\mathrm{diag}(1\;1)$. The final stationary value of the input is $u = [155\;\;84]$, which represents
the closest feasible input value to the target $u^a_{des}$. Finally, Figure 12 shows the control cost of






the two simulated time periods. Observe that in the last period of time (from 51 min to 100
min) the true cost function does not reach zero, since the change in the operating point
prevents the input and output constraints from being satisfied simultaneously.



Fig. 9. Input feasible sets for the FCC subsystem when a change in the output zones is
introduced.

Fig. 10. Controlled outputs and set points for the FCC subsystem with modified zones.






Fig. 11. Manipulated inputs for the FCC subsystem with modified output zones.

Fig. 12. Cost function for the FCC subsystem with modified zones. True cost function (solid
line); Cost function of Model 3 (dotted line).



7. Conclusion 

In this chapter, a robust MPC previously presented in the literature was extended to the
output zone control of time delayed systems with input targets. To this end, an extended
model that incorporates additional states to account for the time delay is presented. The
control structure assumes that model uncertainty can be represented as a discrete set of
models (multi-model uncertainty). The proposed approach assures both recursive
feasibility and stability of the closed-loop system. The main idea consists in using an
extended set of variables in the control optimization problem, which includes the set point
of each predicted output. This approach introduces additional degrees of freedom in the
zone control problem. Stability is achieved by imposing non-increasing cost constraints that
prevent the cost corresponding to the true plant from increasing. The strategy was shown, by
simulation, to have an adequate performance for a 2x3 subsystem of a typical industrial
system.

8. References 

Badgwell T. A. (1997). Robust model predictive control of stable linear systems. International 

Journal of Control, 68, 797-818. 
Gonzalez A. H.; Odloak D.; Marchetti J. L. & Sotomayor O. (2006). IHMPC of a Heat- 
Exchanger Network. Chemical Engineering Research and Design, 84 (A11), 1041-1050.
Gonzalez A. H. & Odloak D. (2009). Stable MPC with zone control. Journal of Process 

Control, 19, 110-122. 
Gonzalez A. H.; Odloak D. & Marchetti J. L. (2009) Robust Model Predictive Control with 

zone control. IET Control Theory Appl., 3, (1), 121-135. 
Gonzalez A. H.; Odloak D. & Marchetti J. L. (2007). Extended robust predictive control of 

integrating systems. AIChE Journal, 53 1758-1769. 
Kassmann D. E.; Badgwell T. & Hawkings R. B. (2000). Robust target calculation for model 

predictive control. AIChE Journal, 45 (5), 1007-1023. 
Muske K.R. & Badgwell T. A. (2002). Disturbance modeling for offset free linear model 

predictive control. Journal of Process Control, 12, 617-632. 
Odloak D. (2004). Extended robust model predictive control. AIChE Journal, 50 (8) 1824-1836. 
Pannochia G. & Rawlings J. B. (2003). Disturbance models for offset-free model-predictive 

control. AIChE Journal, 49, 426-437. 
Qin S.J. & Badgwell T. A. (2003). A Survey of Industrial Model Predictive Control 

Technology, Control Engineering Practice, 11 (7), 733-764. 
Rawlings J. B. (2000). Tutorial overview of model predictive control. IEEE Control Systems 

Magazine, 38-52. 
Sotomayor O. A. Z. & Odloak D. (2005). Observer-based fault diagnosis in chemical plants. 

Chemical Engineering Journal, 112, 93-108. 
Zanin A. C.; Gouvea M. T. & Odloak D. (2002). Integrating real time optimization into the

model predictive controller of the FCC system. Contr. Eng. Pract., 10, 819-831. 



16 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems 

Using Robust Reliability Method 

Shuxiang Guo 

Faculty of Mechanics, College of Science, Air Force Engineering University, Xi'an 710051,
P. R. China

1. Introduction 

Stability is of primary importance for any control systems. Stability of both linear and 
nonlinear uncertain systems has received a considerable attention in the past decades (see 
for example, Tanaka & Sugeno, 1992; Tanaka, Ikeda, & Wang, 1996; Feng, Cao, Kees, et al. 
1997; Teixeira & Zak, 1999; Lee, Park, & Chen, 2001; Park, Kim, & Park, 2001; Chen, Liu, & 
Tong, 2006; Lam & Leung, 2007, and references therein). Fuzzy logic control (FLC) has
proved to be a successful control approach for a great many complex nonlinear systems. 
Especially, the well-known Takagi-Sugeno (T-S) fuzzy model has become a convenient tool 
for dealing with complex nonlinear systems. T-S fuzzy model provides an effective 
representation of nonlinear systems with the aid of fuzzy sets, fuzzy rules and a set of local 
linear models. Once the fuzzy model is obtained, control design can be carried out via the so 
called parallel distributed compensation (PDC) approach, which employs multiple linear 
controllers corresponding to the locally linear plant models (Hong & Langari, 2000). It has 
been shown that the problems of controller synthesis of nonlinear systems described by the 
T-S fuzzy model can be reduced to convex problems involving linear matrix inequalities 
(LMIs) (Park, Kim, & Park, 2001). Many significant results on the stability and robust control 
of uncertain nonlinear systems using T-S fuzzy model have been reported (see for example, 
Hong, & Langari, 2000; Park, Kim, & Park, 2001; Xiu & Ren, 2005; Wu & Cai, 2006; 
Yoneyama, 2006; 2007), and considerable advances have been made. However, as stated in 
Guo (2010), many approaches for stability and robust control of uncertain systems are often 
characterized by conservatism when dealing with uncertainties. In practice, uncertainty 
exists in almost all engineering systems and is frequently a source of instability and 
deterioration of performance. So, uncertainty is one of the most important factors that have 
to be taken into account rationally in system analysis and synthesis. Moreover, it has been 
shown (Guo, 2010) that the increasing in conservatism in dealing with uncertainties by some 
traditional methods does not mean the increasing in reliability. So, it is significant to deal 
with uncertainties by means of a reliability approach and to achieve a balance between
reliability and performance/control-cost in the design of uncertain systems.
In fact, traditional probabilistic reliability methods have been utilized as measures of
stability, robustness, and active control effectiveness of uncertain structural systems by
Spencer et al. (1992, 1994), Breitung et al. (1998) and Venini & Mariani (1999) to develop




robust control strategies which maximize the overall reliability of controlled structures. 
Robust control design of systems with parametric uncertainties has also been studied by
Mengali and Pieracci (2000) and Crespo and Kenny (2005). These works are meaningful in
improving the reliability of uncertain controlled systems, and it has been shown that the use 
of reliability analysis may be rather helpful in evaluating the inherent uncertainties in 
system design. However, these works are within the probabilistic framework. 
In Guo (2007, 2010), a non-probabilistic robust reliability method for dealing with bounded
parametric uncertainties of linear controlled systems was presented. The non-probabilistic
procedure can be implemented more conveniently than the probabilistic one, whether in
dealing with the uncertainty data or in controller design, since complex computations are
often involved in the controller design of uncertain systems. In this chapter, following the
basic idea developed in Guo (2007, 2010), we focus on
developing a robust reliability method for robust fuzzy controller design of uncertain 
nonlinear systems. 

2. Problem statements and preliminary knowledge 

Consider a nonlinear uncertain system represented by the following T-S fuzzy model with 
parametric uncertainties: 
Plant Rule i: 

IF $x_1(t)$ is $F_{i1}$ and ... and $x_n(t)$ is $F_{in}$,

THEN $\dot x(t) = A_i(p)x(t) + B_i(p)u(t)$,  $(i = 1,\dots,r)$   (1)

where $F_{ij}$ is a fuzzy set, $x(t)\in R^n$ is the state vector, $u(t)\in R^m$ is the control input vector, and $r$
is the number of rules of the T-S fuzzy model. The system matrices $A_i(p)$ and $B_i(p)$ depend
on the uncertain parameters $p = \{p_1, p_2, \dots, p_p\}$.
The defuzzified output of the fuzzy system can be represented by 



$$\dot x(t) = \sum_{i=1}^{r}\mu_i(x(t))\bigl[A_i(p)x(t) + B_i(p)u(t)\bigr] \qquad (2)$$

in which

$$\mu_i(x(t)) = \omega_i(x(t))\Big/\sum_{i=1}^{r}\omega_i(x(t)),\qquad \omega_i(x(t)) = \prod_{j=1}^{n}F_{ij}\bigl(x_j(t)\bigr) \qquad (3)$$

where $F_{ij}(x_j(t))$ is the grade of membership of $x_j(t)$ in the fuzzy set $F_{ij}$, and $\omega_i(x(t))$ satisfies
$\omega_i(x(t)) \ge 0$ for all $i$ ($i = 1,\dots,r$). Therefore, the following relations hold

$$\mu_i(x(t)) \ge 0\ (i = 1,\dots,r),\qquad \sum_{i=1}^{r}\mu_i(x(t)) = 1 \qquad (4)$$

If the system (1) is local controllable, a fuzzy model of state feedback controller can be stated 
as follows: 

Control Rule i: 

IF x x (t) is F n and ... and x n (t) is F fn/ THEN u(t) = K i x(t) / (f = l,...,r) (5) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 373 

Where Kj <=R mxn (f = l,...,r) are gain matrices to be determined. The final output of the 
fuzzy controller can be obtained by 

r 

u(t) = ^ Ml (x(t))K lX (t) (6) 

i=\ 

By substituting the control law (6) into (2), we obtain the closed-loop system as follows 

r r 

* (t) = Z Z Mi (x(t))M i (x(t))[ Ai (p) + Bi (p)K i ]x(t) (7) 

When the parameterized notation (Tuan, Apkarian, and Narikiyo 2001) is used, equations 
(6) and (7) can be rewritten respectively as 

«(0 = K(ju)x(t) (8) 

x(t) = {A(p,ju) + B(p,ju)K(ju))x(t) (9) 

Where 

M = (Mi(x(t)) 9 ... 9 Mx(t)))sn^l M GR^ (10) 

r r r 

K0i) = Y,M*(!Wi > A(p, J u) = Y J M l (x(t))A l (p) / B(p,ju) = Y,Vi(x(t))Bi(p) (11) 
i=\ f=l i=l 

Note that the uncertain parameters p = { pj , p 2 , • • • , p p } are involved in the expressions of (9) 
and (11). Following the basic idea developed by Guo (2007,2010), the uncertain-but-bounded 
parameters p = {pi,p 2 ,'-,p p } involved in the problem can be considered as interval 
variables and expressed in the following normalized form 

Pi=Pio+Pid#i (i = l,...,p) (12) 

where p iQ and p^ are respectively the nominal and deviational values of the uncertain 
parameter p { , Sj e [-1,1] is a standard interval variable. Furthermore, the system matrices 
are expressed in a corresponding form of that depend on the standard interval variables 
5 = [Si,S 2 ,'",S p ] . Suppose that the stability of the control system can be reduced to solving 
a matrix inequality as follows 

M(5,P 1 ,P 2 ,...,P / )<0 (13) 

where, Pi,P 2 ,..,Pi are feasible matrices to be determined. The sign " < " indicates that the 
matrix is negative-definite. 

If the performance function (it may also be referred to as limit-state function) used for 
reliability analysis is defined in terms of the criterion (13) and represented by 
M = M(5,P 1 ,P 2 ,...,P/) , and the reliable domain in the space built by the standard variables 



374 Robust Control, Theory and Applications 

6 = [Si,S 2 ,--;S p ] is indicated by Q r = {5 : M(8, P { , P 2 , . . . , P z ) < 0} , then the robust reliability 
can be given as follows 

rj r = sup||5| oo :M(5,P 1 ,P 2 ,...,P z )<0}-l (14) 

8eR? 

Where, \\d\\ denote the infinity norm of the vector 8 = [Si,S 2 , •••,$„]. Essentially, the robust 
reliability rj r defined by (14) represents the admissible maximum degree of expansion of 
the uncertain parameters in the infinity topology space built by the standard interval 
variables under the condition of that (13) is satisfied. If r/ r > holds, the system is reliable 
for all admissible uncertainties. The larger the value of r/ r , the more robust the system with 
respect to uncertainties and the system is more reliable for this reason. So it is referred to as 
robust reliability in the paper as that in Ben-Haim (1996) and Guo (2007,2010). 
The main objective of this chapter is to develop a method based on the robust reliability idea 
to deal with bounded parametric uncertainties of the system (1) and to obtain reliability- 
based robust fuzzy controller (6) for stabilizing the nonlinear system. 
Before deriving the main results, the following lemma is given to provide a basis. 
Lemma 1 (Guo, 2010). Given real matrices Y, E l ,E 2 ,---,E n , F l ,F 2 , ..., and F n with 
appropriate dimensions and Y = Y , then for any uncertain matrices A x = diag{Si j , • -' 9 S\ m } , 
A 2 =diag{S 2U '-,S 2m2 } ,•••, and A n =diag{S nl ,—,S nmn } satisfying \d^<a (i = l,...,n, 
j = 1,. ..,m n ), the following inequality holds for all admissible uncertainties 

Y + ^( £ , A F, + f; T 4 T E, T )<0 (15) 

1=1 

if and only if there exist n constant positive-definite diagonal matrices H { , H 2 , ..., and H n 
with appropriate dimensions such that 

n , , 

Y + ^yEiHiEf +a 2 F^H; l F i )<0 (16) 

i=\ 

3. Methodology and main results 

3.1 Basic theory 

The following commonly used Lyapunov function is considered 

V(x(t)) = x(t) T Px(t) (17) 

where P is a symmetric positive definite matrix. The time derivative of V(x(t)) is 

V(x(t)) = x(t) T Px(t) + x(tf Px(t) (18) 

Substituting (9) into (18), we can obtain 

y(x(0) = * T (0{(A(/^^ (19) 

So, V(x(t)) < is equivalent to (20) and further equivalent to (21) that are represented as 
follows 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 375 

(A(p, M ) + B(p, M )K( M )) T P + P(A(p, M ) + B(p, M )K( M ))<0, M eI3 (20) 

{A(p, M )X + B(p, M )Y( M )) T +{A(p, M )X + B(p, M )Y(ju))<0 r ^ e n (21) 

In which, X = P~ , Y(ju) = K(ju)X possess the following form 

r r 

Y0/) = £/4(*(0)li =^Mx(t))K t X (i = l,...,r) (22) 

i'=i j=i 

Let 

Q ij (p,X,Y j ) = (A i (p)X + B i (p)Y j ) + (A i (p)X + B i (p)Y j ) T (i,j = \,...,r ) (23) 

then (21) can be written as 

r r 
Y^ViMjQij(p,X,*j)<0 (24) 

Some convex relaxations for (24) have been developed to make it tractable. Two type of 
relaxation are adopted here to illustrate the presented method. 

3.1.1 A simple relaxation of (24) represented as follows is often used by authors (Lee, Park, 
& Chen 2001) 

Q ll (p,X,Y l )<0, Q ij (p,X,Y j ) + Q ji (p,X,Y i )<0 (l<i<j<r) (25) 

These expressions can be rewritten respectively as 

(A ; {p)X + B i (p)Y i ) + (A ; (p)X + B i (p)Y i ) T <0 (i = l,...,r) (26) 



(A,-(p)X + B,-(p)Y,-) + (A,(p)X + B,(p)Y ; ) T + (A ; (p)X + B^p)^) 



+(A ; (p)X + B y (p)Y,) T <0 (l<i<j<r) 



(27) 



Expressing all the uncertain parameters p = {p l ,p 2 ,--,p p } as the standard form of (12), 
furthermore, the system matrices are expressed as a corresponding form of that depend on 
the standard interval variables 8 = [Si,S 2 ,'~,S p ]. Without loss of generality, suppose that all 
the uncertain matrices A z (p) and B z (p) can be expressed as 

V 1 

A i (p) = A io +Y, A ij S ij< B i(P) = B io + Ys BikSik ( i = l > — r ) ( 28 ) 

In which, A i0 , B i0 , A» , and B ik are known real constant matrices determined by the 
nominal and deviational values of the basic variables. To reduce the conservatism caused by 
dealing with uncertainties as far as possible, representing all the matrices A» and B ik as 
the form of the vector products as follows 



376 



Robust Control, Theory and Applications 



Aq=VinV$ 2 , B lk =U, n ul 2 («=l,...,r, j=\...,V. k = l,...,q) 
In which, V#i / ^72 ' ^zfci / anc ^ ^ikl are a ^ column vectors. Denoting 



(29) 



V,i=[v,n V, 2 



V 



. V,2=[V, 12 v, 2 



>2 



u,i=[u,ii u,2i - u,,i]; u, 2 = [u,i2 u,22 - u„ 2 ] r , 

A,-i = <%{$! ,• • • , <5j p }; A, 2 = rfkg{ J n ,• • • , S iq }; (i = l,...,r) 



(30) 



where, the first four matrices are constructed by the column vectors involved in (29). Then, 
the expressions in (28) can be further written as 

A i (p) = A i0 +V n A n V i2 , B i (p) = B i0 +U il A i2 U i2 (i=l r) (31) 

Substituting (31) into equations (26) and (27), we can obtain 

(A, X + B, Y,) + (A, X + B, Y,) T + (V fl A fl V„X) + (V fl A fl V f2 X) r 
+(U,A2U, 2 Y,) + (U, 1 A, 2 U, 2 Y,) T <0 (i = l r) 

(A, X + B,- Y, ) + (A, X + B, Y, ) T + (V n A a V i2 X) + ( V fl A n V, 2 X) T 

+ (UaA,-2U,. 2 Y i ) + (U,. 1 A i2 U i2 Y i ) T + 
+(A ;0 X + B ;0 Y,) + (A y0 X + B ;0 Y,) T + (V ;1 A ;1 V y2 X) + (V ;1 A ;1 V ;2 X) T 

+ (U ; iA ; . 2 U ;2 Y,) + (U ;1 A ; . 2 U ;2 Y,) r < (1 < i < j < r) 

In terms of Lemma 1, the matrix inequality (32) holds for all admissible uncertainties if and 
only if there exist diagonal positive-definite matrices E zl and E i2 with appropriate 
dimensions such that 



(32) 



(33) 



(A, X + B, Y,) + (A, X + B, Y,) T + V (1 E,X + a 2 (V f2 X) T E;?(V f2 X) + 
+U il E i2 U[ 1 +« 2 (U <2 Y,) T E- 2 1 (U, 2 Y,)<0 (i = l r) 



(34) 



Similarly, (33) holds for all admissible uncertainties if and only if there exist constant 
diagonal positive-definite matrices H«i , H^ 2 , Hm , and Hy 4 such that 

(A,- X + B, Y ; ) + (A,- X + B, Y y ) T + V (1 H p V^ + V /a Hg 2 v£ + 
+U, 1 H <;3 Uf 1 + U ;1 H §4 U)i + (A y0 X + B ; . Y,) + 
(A ;0 X + B ;0 Y,) T + a 2 (V, 2 X) r H^(V, 2 X) + a 2 (V ;2 X) r H^(V ;2 X) + 
+a 2 (U, 2 Y 7 ) r H^ 3 (U, 2 Y 7 ) + « 2 (U y2 Y,) T H^ 4 (U y2 Y,) < (1 < i < j < r) 
Applying the well-known Schur complement, (34) and (35) can be written respectively as 



(35) 



aXVf 2 (aU i2 Yj) T 



aV l2 X -E n 
aU i2 Y 



<0 (i = l,...,r) 



(36) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



377 



Ta 



aV l2 X 


- Hij X 


* 


aV j2 X 





-H 


aU i2 Yj 








aUj 2 Yi 









ij2 



H 



ifl 



- Hij 4 



<0 ( 1 < z < ; < r J 



(37) 



In which, 

S, = (A, X + B, Y,.) + (A,. X + B,. Y,.) T + V a E a vS + U n E i2 ul, 

r, y = (A, X + B, Y ; .) + (A, X + B, Y ; .) T + {A^X + B^ + iA^X + B^f + V a H^yl + 

v n n ij2 vl + UnHpuZ + u n H ijA u T n . 



denotes the transposed matrices in the 



symmetric positions. 

Consequently, the following theorem can be obtained. 

Theorem 3.1. For the dynamic system (2) with the uncertain matrices represented by (31) 
and \<5 m | < a ( m = 1,. . .,/? ), it is asymptotically stabilizable with the state feedback controller 
(6) if there exist a symmetric positive-definite matrix X , matrices Y z , and constant diagonal 
positive-definite matrices E zl , E i2 , H^ , H i; - 2 > H« 3 , and H z ; 4 ( 1 < i < j < r ) such that the 
LMIs represented by (36) and (37) hold for all admissible uncertainties. If the feasible 
matrices X and Y z are found out, then the feedback gain matrices deriving the fuzzy 
controller (6) can be obtained by 



K i =Y i X' 1 (z = l,...,r) 



(38) 



It should be stated that the condition of (25) is restrictive in practice. It is adopted yet here is 
merely to show the proposed reliability method and for comparison. 

3.1.2 Some improved relaxation for (24) have also been proposed in literatures. A relaxation 
provided by Tuan, Apkarian, and Narikiyo (2001) is as follows 



Q„(P,X,Y ; )<0 (i=l r) 



Qu(p,X,Y i ) + ^-(Q ll (p,X,Y J ) + Q JI (p,X,Y l ))<0(lZi*iZr) 



(39) 
(40) 



The expression (39) is the same completely with the first expression of (25). So, only (40) is 
investigated further. It can be rewritten as 



(A,.(p)X + B ; (p)Y,) + (A,.(p)X + B,(p)Y,.) r + 
^{(A 1 .(p)X + B,.(p)Y ; ) + (A 1 .(p)X + B,(p)Y ; ) T + 

+(A ; (p)X + B ; (p)Y,.) + (A ; (p)X + B ; (p)Y ; ) r }<0 (l<i*;<r) 
On substituting the expression (31) into (41), we obtain 



(41) 



378 



Robust Control, Theory and Applications 



{(A i0 X + B i0 Y f ) + (A f0 X + B f0 Y f ) T } + ^{(A i0 X + B z0 Y ; ) + (A i0 X + B z0 Y ; ) T } + 

+ ^i{(A ;0 X + B ;0 Y Z ) + (A ;0 X + B ;0 Y Z ) T } + ^{(V fl A fl V i2 X) + (V fl A fl V i2 X) T } + 

+(U ll A f2 U i2 Y i ) + (U zl A z2 U z2 Y z ) T + (^U zl A z2 U z2 Y ; ) + (^U zl A z2 U z2 Y ; ) T 

{(V ;1 A ;1 V ;2 X) + (V ;1 A ;1 V ;2 X) T + (U ;1 A ; . 2 U ; . 2 Y f ) + (U ;1 A ;2 U ;2 Y Z ) T } < 

(1 < i * j < r) 



(42) 



r-1 

2 



In terms of Lemma 1, the matrix inequality (42) hold for all admissible uncertainties if and 
only if there exist constant diagonal positive-definite matrices F n , F i2 , F i3 , H^ , and H i; - 2 
such that 

{(A z0 X + B i0 Y f ) + (A i0 X + B i0 Y f ) T } + ^{(A i0 X + B i0 Y ; .) + (A i0 X + B i0 Y/} 



r-1 






v 2/tr V\Tt7-1/ 



2 {(A ; . X + B j0 X) + (A ; . X + B ; . Y,) r j + [ — V (1 |F n | ^ V, , | -t a\ V, 2 X)< F ;1 ' (V, 2 X) 



+U, 1 F, 2 Uf 1 +« 2 (U, 2 Y,) r F- 2 1 (U, 2 Y,)} + ^U, 1 jH, yl ^U, 1 J + « 2 (U, 2 Y 7 ) r H^(U, 2 Y y ) (43) 

^V ;1 ]F ; ,[l^V ;1 ]\[l^U ;1 ]H, 2 [l^U ;1 ]\« 2 (V ;2 X) r F-(V ;2 X) 

+ « 2 (U ; . 2 Y,) r H^ 2 (U ; . 2 Y,)} < (1 < i * j < r) 
Applying the Schur complement, (43) is equivalent to 



w 

V 


* 


* 


aV i2 X 


-F,i 


* 


aU l2 Y t 





-F i2 



aV j2 X -F j3 * 

aU i2 Y j -H in 
aU j2 Y { 



H 



,/2 



<0 (\<i*j<r) 



(44) 



r-1 



In which, W t] = {(A, X + B, Y,) + (A, X + B, Y,) T } +— {(A, X + B, Y ; ) + (A, X + B, Y ; ) T } 
h^{(A ;0 X + B y0 Y,) + (A ;0 X + B y0 Y,) T } + f ^V a jp a ^V ;l j + U fl F f2 u£ + 



2 
r-1 



V,! R 



;i r ;3 



r-1 



r-1, 



2" V H + ^ Ul1 % 2 



r-1 



r-1, 

2~ 



U a | + |— U ;1 H, y2 



^ . • 



This can be summarized as follows. 

Theorem 3.2. For the dynamic system (2) with the uncertain matrices represented by (31) 
and \S m \ < a (m = l,...,p ), it is asymptotically stabilizable with the state feedback controller 
(6) if there exist a symmetric positive-definite matrix X , matrices Y f , and constant diagonal 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



379 



positive-definite matrices E a , E i2 , F n , F i2 , F i3 , H^ , and H i; - 2 (l<i^ j <r) such that the 
LMIs (36) and (44) hold for all admissible uncertainties. If the feasible matrices X and Y { 
are found out, the feedback gain matrices deriving the fuzzy controller (6) can then be given 
by (38). 

3.2 Robust reliability based stabilization 

In terms of Theorem 3.1, the closed-loop fuzzy system (7) is stable if all the matrix 
inequalities (36) and (37) hold for all admissible uncertainties. So, the performance functions 
used for calculation of reliability of that the uncertain system to be stable can be taken as 



M l (a,X,Y n E n ,E l2 ) = 



M^a 9 XX,Yj,Hyl,Hij2,Hy3,Hij4) = 



1—1 i 


aXV] 2 


(aU l2 Y t ) T 


aV i2 X 


~E n 





aU i2 Y t 





~E i2 


r -j 


* 


* * 


aV i2 X 


-H-iji 


* * 


aV j2 X 





- H * 

n ij2 


aU il Y j 





-H 


aU ]2 Y t 









(i = l,...,r) 



(45) 



//3 



-H 



ijA 



(1 < i < j < r) (46) 



in which, the expressions of E { and T f are in the same form respectively as that in (36) and 

(37). 

Therefore, the robust reliability of the uncertain nonlinear system in the sense of stability 

can be expressed as 

/7 r = sup {tf:M z KX,X,E zl ,E^^ (47) 

aeR + 

where, R + denotes the set of all positive real numbers. The robust reliability of that the 
uncertain closed-loop system (7) is stable may be obtained by solving the following 
optimization problem 



Maximize a 

Subject to M,(«,X,Y,,E, 1 ,E, 2 )<0 M, j (a ; X,Y,Y i ,H, n ,H,p,H, ]3 ,H,- 4 )<0 

E n > 0, E, 2 > X > 0, H.j > 0, H,- 2 > 0, H, 3 > 0, H,- 4 > (1 < i < j < r) 



(48) 



From the viewpoint of robust stabilizing controller design, if inequalities (36) and (37) hold 
for all admissible uncertainties, then there exists a fuzzy controller (6) such that the closed- 
loop system (7) to be asymptotically stable. Therefore, the performance functions used for 
reliability-based design of control to stabilize the uncertain system (2) can also be taken as 
that of (45) and (46). So, a possible stabilizing controller satisfying the robust reliability 
requirement can be given by a feasible solution of the following matrix inequalities 



M,(a ,X,Y,,E, 1 ,E, 2 )<0, M {j {a ,X,Y J ,Y y ,Hp,H f/2 ,H f/3 ,H ( ,. 4 )<0; E, 1 >0E, 2 >0, 
X>0 / Hp>0,H, 2 >0 / H !/3 >0,H ;4 >0 (l<i<j<r) a =-q a +\ 



(49) 



380 



Robust Control, Theory and Applications 



In which, M { (•) and M,y (•) are functions of some matrices and represented by (45) and (46) 

respectively. r/ cr is the allowable robust reliability. 

If the control cost is taken into account, the robust reliability based design optimization 

of stabilization controller can be carried out by solving the following optimization 

problem 



Minimize Trace(N); 

Subject to M { (a ,X,%,E n ,E i2 ) < M zy (a ,X, Y f/ Y^H^H^H^,] 

E fl >0, E f2 >0, H tjl >0, H zy2 >0, H zy3 >0, % >0, (l<z<;<r) 

n r 



(50) 



I X 



>0, X>0,N>0, a =?] cr +l 



In which, the introduced additional matrix N is symmetric positive-definite and with the 
same dimension as X . When the feasible matrices X and Y f are found out, the optimal 
fuzzy controller could be obtained by using (6) together with (38). 
If Theorem 3.2 is used, the expression of M {] ,(■) corresponding to (46) becomes 



M ii (a,X,Y i ,Y j ,F a ,F i2 ,F a ,H ijl ,H ii 2)- 



V„ 



aV i2 X 


~Fn 


X- 


* 


aU i2 Y { 





-Fa 


* 


aV j2 X 








-F 


aU i2 Y j 











a\l ]2 Y x 












o 



H 



ijl 



(l<i*j<r) (51) 



where, W^ is the same with that in (44). Correspondingly, (47) and (48) become respectively 
as follows. 



^ = sup{a:M,(a,X / Y, / E, 1/ E, 2 )<0 / M^(a,X,Y,,Y ; ,F, 1 ,F, 2/ F, 3/ H, yl ,H y . 2 )<0, 

aeR + 

E fl >0, E i2 >0, X>0,F zl >0,F z2 >0,F z3 >0,H z;/1 >0,Hy 2 >0, l<i*j<r}-l 



(52) 



Maximize a 

Subject to M z («,X,Y z ,E zl ,E z2 ) < M zy (a,X,Y z ,Y 7 ,F zl ,F z2 ,F z3 ,H zyi ,H z;2 ) < 

E zl > 0,E f2 > 0,X > 0,F fl > 0,F z2 > 0,F z3 > 0,H z;/1 > 0,H z)2 > 0, (l<i*j< r) 



Similarly, (50) becomes 



(53) 



Minimize Trace(N) 

Subject to M f (a*,X, Y z , E zl , E z2 ) < 0, M zy (a\X,Y z ,Y ; ,F zl ,F z2 ,F z3 ,H zyi ,H f;2 ) < 

E zl >0,E z2 >0,F zl >0,F z2 >0,F z3 >0,H zyi >0,H zy2 >0, (l<i±j<r) 

n r 



I X 



>0, X>0,N>0, a =?] cr +l 



(54) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



381 



3.3 A special case 

Now, we consider a special case in which the matrices of (30) is expressed as 

V a =V„ V i2 = V 2 , A n =A, U a =U i2 =0 (» = 1 r) 



(55) 



This means that the matrices A, (p) in all the rules have the same uncertainty structure and 
the matrices B, (p) become certain. In this case, (31) can be written as 



A i (p) = A l0 +V l AV 2 , B,(p) = B t0 (» = l,...,r) 



(56) 



and the expressions involved in Theorem 3.1 can be simplified further. This is summarized 
in the following. 

Theorem 3.3. For the dynamic system (2) with the matrices represented by (56) and \S m \ < a 
(m = l,...,p), it is asymptotically stabilizable with the state feedback controller (6) if there 
exist a symmetric positive-definite matrix X , matrices Y t , and constant diagonal positive- 
definite matrices E { and H^ (\<i< j <r) with appropriate dimensions such that the 
following LMIs hold for all admissible uncertainties 



dXVi 



aV 2 X 



<0, 



aV 2 X 



aXV 2 L 



< ( 1 < i < j < r ) 



(57) 



In which, E t = (A l0 X + B l0 Y l ) + (A l0 X + B l0 Y l ) T + V l E ll V l T f 

I {] = (A i0 X + B i0 Y j ) + ( A i0 X + B i0 Y j ) T + (A ;0 X + B j0 Y { ) + ( A j0 X + B j0 Y t ) T + 
(2V 1 )H- ; (2y i ) T . If the feasible matrices X and Y { are found out, the feedback 

gain matrices deriving the fuzzy controller (6) can then be given by (38). 
Proof. In the case, inequalities (32) and (33) become, respectively, 



(A z0 X + B 20 Y 2 ) + (A 20 X + B 20 Y 2 ) r +(V 1 Ay 2 X) + (y i Ay 2 X) T <0 (i = l r) 



(58) 



(A z0 X + B z0 Y ; ) + (A z0 X + B z0 Y j ) T +2(V 1 A V 2 X) + 2(V a A V 2 X) T 
+(A ;0 X + B ;0 Y Z ) + (A ;0 X + B ;0 Y Z ) T < (1 < i < j < r) 



(59) 



In terms of Lemma 1, (58) holds for all admissible uncertainties if and only if there exist 
diagonal positive-definite matrices E { ( i = 1, . . .,r ) with appropriate dimensions such that 

(A i0 X + B, Y,) + (A, X + B i0 X) T + V X E { V? + a 2 (V 2 X) T E; l (V 2 X)<0 (i = l,...,r) (60) 

Similarly, (59) holds for all admissible uncertainties if and only if there exist constant 
diagonal positive-definite matrices EL- such that 



(A, X + B,. Y ; .) + (A, X + B, Y ; .) T + (A j0 X + B j0 Y ; ) + 

+(A ;0 X + B ;0 Y,) T + (2V 1 )H # (2V 1 ) T + « 2 (V 2 X) T H^(V 2 X) < (1 < i < j < r) 



(61) 



Applying Schur complement, (57) can be obtained. So, the theorem holds. 

By Theorem 3.3, the performance functions used for reliability calculation can be taken as 



382 



Robust Control, Theory and Applications 



M i (a / X / Y f/ E i ) = 



aXVi 



aV 7 X 



, (i = l,...,r) 



M f/ (a / X / Y i/ Y // H i/ ) = 



aXVj 



aV 2 X -H f; 



(62) 



(1 < f < j < r) 



Accordingly, a possible stabilizing controller satisfying robust reliability requirement can be 
obtained by a feasible solution of the following matrix inequalities 



M i (a,X,Y i ,E i )<0,M ij (a,X,\ i ,Y j ,H ij )<0, X>0, E z >0; H i; >0 (l<i<j<r) 
a = r/ cr + 1 



(63) 



The optimum stabilizing controller based on the robust reliability and control cost can be 
obtained by solving the following optimization problem 

Minimize Trace(N) 

Subject to M { (a ,X, Y f/ E f ) < 0, M {j (a ,X,\, Y ; -,H i; .) < 0; E f > 0,Hy > 0, (1 < z < ; < r) (64) 

N r 



I X 



>0, X>0,N>0, a =?] cr +l 



Similarly, the expressions involved in Theorem 3.2 can also be simplified and the 
corresponding result is summarized in the following. 

Theorem 3.4. For the dynamic system (2) with the matrices represented by (56) and \S m | < a 
(m = l,...,p), it is asymptotically stabilizable with the state feedback controller (6) if there 
exist a symmetric positive-definite matrix X , matrices Y { , and constant diagonal positive- 
definite matrices E { and H^ ( 1 < z ^ j <r) with appropriate dimensions such that the 
following LMIs hold for all admissible uncertainties 



aXVj 



aV 2 X 



<0, 



w. 



(aV 2 Xf 



(aV 2 X) 



H V1 



< ( 1 < i * j < r ; 



(65) 



In which, S t = (A l0 X + B l0 Y l ) + (A l0 X + B l0 Y l ) T + V^V* , 

W {j = (rV^irV.f+iiA^X + B^) + (A l0 X + B l0 Y l ) T ) 

+ L^{(A l0 X + B l0 Y j ) + (A l0 X + B l0 Y j ) T + (A j0 X + B j0 Y l ) + (A j0 X + B j0 Y l ) T }. If the 

feasible matrices X and Y; are found out, the feedback gain matrices deriving 
the fuzzy controller (6) can then be obtained by (38). 
Proof. (42) can be rewritten as 

{(A, X + B, Y,) + (A,- X + B, Y,) T } + ^-{(A, X + B, Y ; ) + (A, X + B, Y ; ) T 

+ (A ;0 X + B ;0 Y,) + (A ;0 X + B ;0 Y,) T j + r {(V X A V 2 X) + (V X A V 2 X) T j < 

(1 < i * ; < r ) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



383 



In terms of Lemma 1, (66) holds for all admissible uncertainties if and only if there exist 
diagonal positive-definite matrices EL- such that 



r-1 



{(A i0 X + B i0 Y f ) + (A i0 X + B i0 Y f ) T } + ^-{(A f0 X + B z0 Y ; ) + (A f0 X + B z0 Y ; ) T + (A ;0 X + B ;0 Y Z ) 
+ (A ;0 X + B ;0 Y z ) T } + (rV 1 )H zy (rV 1 ) T + a 2 (V 2 X) T H^ 1 (V 2 X)<0 l<i*j<r 



(67) 



Applying Schur complement, (67) is equivalent to the second expression of (65). So, the 

theorem holds. 

By Theorem 3.4, the performance functions used for reliability calculation can be taken as 



M z (a,X,Y z ,E z ; 



(aV 2 xf 



(«v 2 x) 



, (i = l,...,r) 



(68) 



M iy (a / X / Y i/ Y ; . / H^.) = 



W* 



(aV 2 X) J 



(aV 2 X) 



H„ 



, (l<i*j<r) 



So, design of the optimal controller based on the robust reliability and control cost could be 
carried out by solving the following optimization problem 

Minimize Trace(N) 

Subject to M 2 (a , X,Y z ,E z )<0, M^c^X^Y^H^) <0, E { >0,H i; - >0 (l<i*j<r) (69) 
N I 



I X 



>0, X>0,N>0 (a =rj cr +l) 



4. Numerical examples 

Example 1. Consider a simple uncertain nonlinear mass-spring-damper mechanical system 
with the following dynamic equation (Tanaka, Ikeda, & Wang 1996) 

x(t) + x(t) + c(t)x(t) = (1 + 0. 1 3x 3 (t))u(t) 

Where c(t) is the uncertain term satisfying c(t) e [0.5,1.81] . 

Assume that x(t) e [-1.5,1.5] , x(t) e [-1.5,1.5] . Using the following fuzzy sets 

FJx(t)) = 0.5 + ?-&, F 2 (x(t)) = 0.5-^-0- 
1V v " 6.75 2V V " 6.75 

The uncertain nonlinear system can be represented by the following fuzzy model 

Plant Rule 1: IF x(t) is about F { , THEN x(t) = A x x(t) + B x ii(t) 
Plant Rule 2: IF x(t) is about F 2 ,THEN x(t) = A 2 x(t) + B 2 u(t) 



Where 



x(t) = 



m 

x(t) 



, A 1 - A 2 



~-l -c 
1 


,B 1 = 


"1.43875" 



,B 2 = 


"0.56125" 




384 



Robust Control, Theory and Applications 



Expressing the uncertain parameter c as the normalized form, c = 1.155 + 0.655£, 
furthermore, the system matrices are expressed as 



A^A^+V.AV,, A 2 = A 20 +V { AV 2 , B { =B l0 , B 2 =B 20 . 



In which 



^10 ~~ ^-20 - 



-1 -1.155 
1 



>Vi = 



V 2 =[0 -0.655], A = S . 



By solving the optimization problem of (69) with a = 1 and a = 3 respectively, the gain 
matrices are obtained as follows 

K 1= [-0.0567 -0.1446], K 2 = [-0.0730 -0.1768] (a* = 1 ); 

#!= [-0.3645 -1.0570], K 2 = [-0.9191 -2.4978] (a =?>). 

When the initial value of the state is taken as x(0) = [- 1 -1.3] , the simulation results of the 
controlled system with the uncertain parameter generated randomly within the allowable 
range c(t) e [0.5,1.81] are shown in Fig. 1. 






Time (sec) 




Time (sec) 



Fig. 1. Simulation of state trajectories of the controlled system (The uncertain parameter c is 
generated randomly within [0.5, 1.81]) 

Example 2. Consider the problem of stabilizing the chaotic Lorenz system with parametric 
uncertainties as follows (Lee, Park, & Chen, 2001) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



385 



pi (')] 




x 2 (t) 


= 


|*30)J 





-crx 1 (f) + ax 2 (t) 

r Xl (t)-x 2 (t)- Xl (t)x 3 (t) 

Xl (t)x 2 (t)-bx 3 (t) 

For the purpose of comparison, the T-S fuzzy model of the chaotic Lorenz system is 
constructed as 

Plant Rule 1 : IF x x {t) is about M 1 THEN x(t) = A a x(f) + B 1 u(*) 
Plant Rule 2 : IF x x (t) is about M 2 THEN x(t) = A 2 x(t) + B 2 u(t) 
Where 



A 1 = 



r -1 

M 1 



-M 1 



, A 2 = 



-a 


a 





r 


-1 


-M 2 





M 2 


-b 



The input matrices B x and B 2 , and the membership functions are respectively 

The nominal values of (cr,r,b) are (10, 28, 8/3), and choosing [M 1? M 2 ] = [-20,30] . All 
system parameters are uncertain-but-bounded within 30% of their nominal values. 
The gain matrices for deriving the stabilizing controller (6) given by Lee, Park, and Chen 
(2001) are 

K lL = [-295.7653 -137.2603 -8.0866], K 2L = [-443.0647 -204.8089 12.6930]. 

(1) Reliability-based feasible solutions 

In order to apply the presented method, all the uncertain parameters (<j, r, b) are expressed 
as the following normalized form 

a = \0 + 3S { , r = 28 + 8.4£ 2 , & = 8/3 + 0.8<? 3 . 
Furthermore, the system matrices can be expressed as 

- V X AV 2 , A 2 = A 20 + V X AV 2 , B x = B 10 , B 2 = B 20 . 







A 


1~ Ao 


In which 








"-10 


10 





A 10 = 


28 


-1 


20 







-20 


-8/3 



-10 10 
28 -1 
30 








"1 0" 




-30 


,v 1 = 


10 


/V 2 = 


]/3_ 




1 





-3 3 

8.4 









-0.8 



A = diag{S 1 ,S 2 ,S 3 }, B 10 =B 20 =[1 of 



By solving the matrix inequalities corresponding to (63) with a* =1 , the gain matrices are 
found to be 



K a = [-84.2940 -23.7152 -2.4514], K 2 =[-84.4669 -23.6170 3.8484] 



386 



Robust Control, Theory and Applications 



The common positive definite matrix X and other feasible matrices obtained are as follows 



X = 



0.0461 


-0.0845 


0.0009 


-0.0845 


0.6627 


-0.0027 


0.0009 


-0.0027 


0.6685 



, E 1 =diag{3.4260,2.3716,1.8813} 



E 2 =diag{3. 4483,2.2766,1. 9972}, H = diag{l. 653 5,1. 9734,1 .33 18}. 

Again, by solving the matrix inequalities corresponding to (63) with a* = 2, which means 
that the allowable variation of all the uncertain parameters are within 60% of their nominal 
values, we obtain 

1^= [-123.6352 -42.4674 -4.4747], K 2 = [-125.9081 -42.8129 6.8254], 



X 



E 2 = d\ 



1.0410 -1.7704 0.0115 
-1.7704 7.7159 -0.0360 
0.0115 -0.0360 7.7771 

!101.9235,42.7451,24.7517}, H = 



, E { =^{98.7271,44.0157,22.7070}, 



1.8833,31.0026,13. 8173}. 



Clearly, the control inputs of the controllers obtained in the paper in the two cases are all 
lower than that of Lee, Park, and Chen (2001). 



20 



-20 



v\f\f\ 



*,(t) 



10 



20 



-20 



WvAAf 



xAt) 



10 




60 

40 

20 





x,(t) 



60 

40 

20 





10 



x,(t) 



Time (sec) 



Time (sec) 



10 



Fig. 2. State trajectories of the controlled nominal chaotic Lorenz system (On the left- and 
right-hand sides are results respectively of the controller of Lee, Park, and Chen (2001) and 
of the controller obtained in this paper) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 



387 



(2) Robust reliability based design of optimal controller 

Firstly, if Theorem 3.3 is used, by solving a optimization problem corresponding to (64) with 
a = 1 , the gain matrices as follows for deriving the controller are obtained 

K lG = [-20.8512 -13.5211 -3.2536], K 2G = [-21.2143 -13.1299 4.3799]. 

The norm of the gain matrices are respectively ]K 1G || = 25.0635 and |JC 2G || = 25.3303 . So, 
there exist relations 



K 1L =326.1639 = 13.0135 K 1G 



\\Kor =- 



;.2767 = 19.2764 K 2G 



To examine the effect of the controllers, the initial values of the states of the Lorenz system 
are taken as x(0) = [lO -10 -10] , the control input is activated at £=3.89s, all as that of 
Lee, Park, and Chen (2001), the simulated state trajectories of the controlled Lorenz system 
without uncertainty are shown in Fig. 2. In which, on the left- and right-hand sides are 
results of the controller of Lee, Park, and Chen (2001) and of the controller obtained in this 
paper respectively. Simulations of the corresponding control inputs are shown in Fig. 3, in 
which, the dash-dot line and the solid line represent respectively the input of the controller 
of Lee, Park, and Chen (2001) and of the controller in the paper. 



6000 



4000 



2000 




4.5 5 

Time (sec) 

Fig. 3. Control input of the two controllers (dash-dot line and solid line represent 
respectively the result of Lee, Park, and Chen (2001) and the result of the paper) 

The simulated state trajectories and phase trajectory of the controlled Lorenz system are 
shown respectively in Figs. 4 and 5, in which, all the uncertain parameters are generated 
randomly within the allowable ranges. 



388 



Robust Control, Theory and Applications 




20 



-20 



|jU| *° ■ 


NfjF 



20 



-20 



I^Aufi * 2(t) ' 


w|f 




5 
Time (sec) Time (sec) 

Fig. 4. Ten-times simulated state trajectories of the controlled chaotic Lorenz system with 
parametric uncertainties (all uncertain parameters are generated randomly within the 
allowable ranges, and on the left- and right-hand sides are respectively the results of 
controllers in Lee, Park, and Chen (2001) and in the paper) 




40 



50 



Fig. 5. Ten-times simulated phase trajectories of the parametric uncertain Lorenz system 
controlled by the presented method (all parameters are generated randomly within their 
allowable ranges) 



Robust Fuzzy Control of 

Parametric Uncertain Nonlinear Systems Using Robust Reliability Method 389 

It can be seen that the controller obtained by the presented method is effective, and the 
control effect has no evident difference with that of the controller in Lee, Park, and Chen 
(2001), but the control input of it is much lower. This shows that the presented method is 
much less conservative. 

Taking a = 3 , which means that the allowable variation of all the uncertain parameters are 
within 90% of their nominal values, by applying Theorem 3.3 and solving a corresponding 
optimization problem of (64) with a = 3 , the gain matrices for deriving the fuzzy controller 
obtained by the presented method become 

K 1G = [-54.0211 -32.5959 -6.5886], K 2G =[-50.0340 -30.6071 10.4215]. 

Obviously, the input of the controller in this case is also much lower than that of the 
controller obtained by Lee, Park, and Chen (2001). 

Secondly, when Theorem 3.4 is used, by solving two optimization problems corresponding 
to (69) with a = 1 and a = 3 respectively, the gain matrices for deriving the controller are 
found to be 

K 1G = [-20.8198 -13.5543 -3.2560], K 2G =[-21.1621 -13.1451 4.3928] (a =1). 
K 1G = [-54.0517 -32.6216 -6.6078], K 2G =[-50.0276 -30.6484 10.4362] (a = 3) 

Note that the results based on Theorem 3.4 are in agreement, approximately, with those 
based on Theorem 3.3. 

5. Conclusion 

In this chapter, stability of parametric uncertain nonlinear systems was studied from a new 
point of view. A robust reliability procedure was presented to deal with bounded 
parametric uncertainties involved in fuzzy control of nonlinear systems. In the method, the 
T-S fuzzy model was adopted for fuzzy modeling of nonlinear systems, and the parallel- 
distributed compensation (PDC) approach was used to control design. The stabilizing 
controller design of uncertain nonlinear systems were carried out by solving a set of linear 
matrix inequalities (LMIs) subjected to robust reliability for feasible solutions, or by solving 
a robust reliability based optimization problem to obtain optimal controller. In the optimal 
controller design, both the robustness with respect to uncertainties and control cost can be 
taken into account simultaneously. Formulations used for analysis and synthesis are within 
the framework of LMIs and thus can be carried out conveniently. It is demonstrated, via 
numerical simulations of control of a simple mechanical system and of the chaotic Lorenz 
system, that the presented method is much less conservative and is effective and feasible. 
Moreover, the bounds of uncertain parameters are not required strictly in the presented 
method. So, it is applicable for both the cases that the bounds of uncertain parameters are 
known and unknown. 

6. References 

Ben-Haim, Y. (1996). Robust Reliability in the Mechanical Sciences, Berlin: Spring- Verlag 
Breitung, K.; Casciati, F. & Faravelli, L. (1998). Reliability based stability analysis for actively 
controlled structures. Engineering Structures, Vol. 20, No. 3, 211-215 



390 Robust Control, Theory and Applications 

Chen, B.; Liu, X. & Tong, S. (2006). Delay-dependent stability analysis and control synthesis of 

fuzzy dynamic systems with time delay. Fuzzy Sets and Systems, Vol. 157, 2224-2240 
Crespo, L. G. & Kenny, S. P. (2005). Reliability-based control design for uncertain systems. 

Journal of Guidance, Control, and Dynamics, Vol. 28, No. 4, 649-658 
Feng, G.; Cao, S. G.; Kees, N. W. & Chak, C. K. (1997). Design of fuzzy control systems with 

guaranteed stability. Fuzzy Sets and Systems, Vol. 85, 1-10 
Guo, S. X. (2010). Robust reliability as a measure of stability of controlled dynamic systems 

with bounded uncertain parameters. Journal of Vibration and Control, Vol. 16, No. 9, 

1351-1368 
Guo, S. X. (2007). Robust reliability method for optimal guaranteed cost control of 

parametric uncertain systems. Proceedings of IEEE International Conference on Control 

and Automation, 2925-2928, Guangzhou, China 
Hong, S. K. & Langari, R. (2000). An LMI-based Hoo fuzzy control system design with TS 

framework. Information Sciences, Vol. 123, 163-179 
Lam, H. K. & Leung, F. H. F. (2007). Fuzzy controller with stability and performance rules 

for nonlinear systems. Fuzzy Sets and Systems, Vol. 158, 147-163 
Lee, H. J.; Park, J. B. & Chen, G. (2001). Robust fuzzy control of nonlinear systems with 

parametric uncertainties. IEEE Transactions on Fuzzy Systems, Vol. 9, 369-379 
Park, J.; Kim, J. & Park, D. (2001). LMI-based design of stabilizing fuzzy controllers for 

nonlinear systems described by Takagi-Sugeno fuzzy model. Fuzzy Sets and 

Systems, Vol. 122, 73-82 
Spencer, B. F.; Sain, M. K.; Kantor, J. C. & Montemagno, C. (1992). Probabilistic stability 

measures for controlled structures subject to real parameter uncertainties. Smart 

Materials and Structures, Vol. 1, 294-305 
Spencer, B. F.; Sain, M. K.; Won C. H.; et al. (1994). Reliability-based measures of structural 

control robustness. Structural Safety, Vol. 15, No. 2, 111-129 
Tanaka, K.; Ikeda, T. & Wang, H. O. (1996). Robust stabilization of a class of uncertain 

nonlinear systems via fuzzy control: quadratic stabilizability, FL control theory, and 

linear matrix inequalities. IEEE Transactions on Fuzzy Systems, Vol. 4, No. 1, 1-13 
Tanaka, K. & Sugeno, M. (1992). Stability analysis and design of fuzzy control systems. 

Fuzzy Sets and Systems, Vol. 45, 135-156 
Teixeira, M. C. M. & Zak, S. H. (1999). Stabilizing controller design for uncertain nonlinear 

systems using fuzzy models. IEEE Transactions on Fuzzy Systems, Vol. 7, 133-142 
Tuan, H. D. & Apkarian, P. (1999). Relaxation of parameterized LMIs with control 

applications. International Journal of Nonlinear Robust Control, Vol. 9, 59-84 
Tuan, H. D.; Apkarian, P. & Narikiyo, T. (2001). Parameterized linear matrix inequality 

techniques in fuzzy control system design. IEEE Transactions on Fuzzy Systems, Vol. 

9,324-333 
Venini, P. & Mariani, C. (1999). Reliability as a measure of active control effectiveness. 

Computers and Structures, Vol. 73, 465-473 
Wu, H. N. & Cai, K. Y. (2006). H2 guaranteed cost fuzzy control design for discrete-time 

nonlinear systems with parameter uncertainty. Automatica, Vol. 42, 1183-1188 
Xiu, Z. H. & Ren, G. (2005). Stability analysis and systematic design of Takagi-Sugeno fuzzy 

control systems. Fuzzy Sets and Systems, Vol. 151, 119-138 
Yoneyama, J. (2006). Robust Hoo control analysis and synthesis for Takagi-Sugeno general 

uncertain fuzzy systems. Fuzzy Sets and Systems, Vol. 157, 2205-2223 
Yoneyama, J. (2007). Robust stability and stabilization for uncertain Takagi-Sugeno fuzzy 

time-delay systems. Fuzzy Sets and Systems, Vol. 158, 115-134 



17 



A Frequency Domain Quantitative Technique 
for Robust Control System Design 

Jose Luis Guzman 1 , Jose Carlos Moreno 2 , Manuel Berenguel 3 , Francisco 

Rodriguez 4 , Julian Sanchez-Hermosilla 5 

1/2/3/4l Departamento de Lenguajes y Computation; 
5 Departamento de Ingenieria Rural, University of Almeria 

Spain 



1. Introduction 

Most control techniques require the use of a plant model during the design phase in 
order to tune the controller parameters. The mathematical models are an approximation of 
real systems and contain imperfections by several reasons: use of low-order descriptions, 
unmodelled dynamics, obtaining linear models for a specific operating point (working with 
poor performance outside of this working point), etc. Therefore, control techniques that work 
without taking into account these modelling errors, use a fixed-structure model and known 
parameters (nominal model ) supposing that the model exactly represents the real process, 
and the imperfections will be removed by means of feedback. However, there exist other 
control methods called robust control techniques which use these imperfections implicity 
during the design phase. In the robust control field such imperfections are called uncertainties, 
and instead of working only with one model (nominal model), a family of models is used 
forming the nominal model + uncertainties. The uncertainties can be classified in parametric 
or structured and non-parametric or non-structured. The first ones allow representing the 
uncertainties into the model coefficients (e.g. the value of a pole placed between maximum 
and minimum limits). The second ones represent uncertainties as unmodelled dynamics (e.g. 
differences in the orders of the model and the real system) (Morari and Zafiriou, 1989). 
The robust control technique which considers more exactly the uncertainties is the 
Quantitative Feedback Theory (QFT). It is a methodology to design robust controllers based 
on frequency domain, and was developed by Prof. Isaac Horowitz (Horowitz, 1982; Horowitz 
and Sidi, 1972; Horowitz, 1993). This technique allows designing robust controllers which 
fulfil some minimum quantitative specifications considering the presence of uncertainty in 
the plant model and the existence of perturbations. With this theory, Horowitz showed that 
the final aim of any control design must be to obtain an open-loop transfer function with 
the suitable bandwidth (cost of feedback) in order to sensitize the plant and reduce the 
perturbations. The Nichols plane is used to achieve a desired robust design over the specified 
region of plant uncertainty where the aim is to design a compensator C(s) and a prefilter F(s) 
(if it is necessary) (see Figure 1), so that performance and stability specifications are achieved 
for the family of plants. 



392 



Robust Control, Theory and Applications 



This chapter presents for SISO (Single Input Single Output) LTI (Linear Time Invariant) 
systems, a detailed description of this robust control technique and two real experiences 
where QFT has successfully applied at the University of Almeria (Spain). It starts with 
a QFT description from a theoretical point of view, afterwards section 3. 1 is devoted to 
present two well-known software tools for QFT design, and after that two real applications 
in agricultural spraying tasks and solar energy are presented. Finally, the chapter ends with 
some conclusions. 

2. Synthesis of SISO LTI uncertain feedback control systems using QFT 

QFT is a methodology to design robust controllers based on frequency domain (Horowitz, 
1993; Yaniv, 1999). This technique allows designing robust controllers which fulfil some 
quantitative specifications. The Nichols plane is the key tool for this technique and is used to 
achieve a robust design over the specified region of plant uncertainty. The aim is to design 
a compensator C(s) and a prefilter F(s) (if it is necessary), as shown in Figure 1, so that 
performance and stability specifications are achieved for the family of plants p(s) describing 
a plant P(s). Here, the notation a is used to represent the Laplace transform for a time domain 
signal a(t). 



ro 



*>- 



i> 



<> 



Fig. 1. Two degrees of freedom feedback system. 

The QFT technique uses the information of the plant uncertainty in a quantitative way, 
imposing robust tracking, robust stability, and robust attenuation specifications (among 
others). The 2DoF compensator {F, C}, from now onwards the s argument will be omitted 
when necessary for clarity, must be designed in such a way that the plant behaviour variations 
due to the uncertainties are inside of some specific tolerance margins in closed-loop. Here, the 
family p(s) is represented by the following equation 



p(s) = [P(s) 



, rg = i(s + Zi)rELi(s 2 + 2g z c«;o z + a;g z ) 

> n?=i(s + pt) nti(s 2 + 2ft"o* + ", 2 



OtJ 

max\. 



k £ [kmin'kmaxl/ z i £ [ z i,miw z i,max\> V? ^ \Vr,miw Vr,n 

bz £ [<vz,min' bz,max\/ ^Oz £ [^Oz^im ^Oz,max\/ 
£t € [£,t,min> £t,max]> ^Ot € [ w 0f,min/ ^Ot,max]/ 

n+m < a+b+ n\ 
A typical QFT design involves the following steps: 



a) 



A Frequency Domain Quantitative Technique for Robust Control System Design 



393 



1. Problem specification. The plant model with uncertainty is identified, and a set of 
working frequencies is selected based on the system bandwidth, Q ={coi,co2,---,CL>k)- 
The specifications (stability, tracking, input disturbances, output disturbances, noise, and 
control effort) for each frequency are defined, and the nominal plant Pq is selected. 

2. Templates. The quantitative information of the uncertainties is represented by a set of 
points on the Nichols plane. This set of points is called template and it defines a graphical 
representation of the uncertainty at each design frequency to. An example is shown in 
Figure 2, where templates of a second-order system given by P(s) = k/s(s + a), with 
k G [1,10] and a £ [1,10] are displayed for the following set of frequencies Q = 
{0.5, 1, 2, 4, 8, 15, 30, 60, 90, 120, 180} rad/s. 

3. Bounds. The specifications settled at the first step are translated, for each frequency cv in 
Q set, into prohibited zones on the Nichols plane for the loop transfer function Lq(jco) = 
C(jcv)Pq(jcv). These zones are defined by limits that are known as bounds. There exist so 
many bounds for each frequency as specifications are considered. So, all these bounds for 
each frequency are grouped showing an unique prohibited boundary. Figure 3 shows an 
example for stability and tracking specifications. 

TEMPLATES 




-150 



-100 (dB) 



Fig. 2. QFT Template example. 



Loop shaping. This phase consists in designing the C controller in such a way that the 
nominal loop transfer function Lq(jcv) = C(jcv)Pq(jco) fulfils the bounds calculated in the 
previous phase. Figure 3 shows the design of Lq where the bounds are fulfilled at each 
design frequency. 

Prefilter. The prefilter F is designed so that the closed-loop transfer function from reference 
to output follows the robust tracking specifications, that is, the closed-loop system 
variations must be inside of a desired tolerance range, as Figure 4 shows. 



394 



Robust Control, Theory and Applications 



NICHOLS PLOT 



^ :; =;s s : _-'■¥ 



(m 



Fig. 3. QFT Bound and Loop Shaping example. 



PRE-FILTER SHfi 




Cdl 



Fig. 4. QFT Prefilter example. 



A Frequency Domain Quantitative Technique for Robust Control System Design 



395 



6. Validation. This step is devoted to verify that the closed-loop control system fulfils, for 
the whole family of plants, and for all frequencies in the bandwith of the system, all the 
specifications given in the first step. Otherwise, new frequencies are added to the set Q, so 
that the design is repeated until such specifications are reached. 

The closed-loop specifications for system in Figure 1 are typically defined in time domain 
and /or in the frequency domain. The time domain specifications define the desired outputs 
for determined inputs, and the frequency domain specifications define in terms of frequency 
the desired characteristics for the system output for those inputs. 

In the following, these types of specifications are described and the specifications translation 
problem from time domain to frequency domain is considered. 

2.1 Time domain specifications 

Typically, the closed-loop specifications for system in Figure 1 are defined in terms of the 
system inputs and outputs. Both of them must be delimited, so that the system operates in a 
predetermined region. For example: 

1. In a regulation problem, the aim is to achieve a plant output close to zero (or nearby a 
determined operation point). For this case, the time domain specifications could define 
allowed operation regions as shown in Figures 5a and 5b, supposing that the aim is to 
achieve a plant output close to zero. 

2. In a reference tracking problem, the plant output must follow the reference input with 
determined time domain characteristics. In Figure 5c a typical specified region is shown, 
in which the system output must stay. The unit step response is a very common 
characterization, due to it combines a fast signal (an infinite change in velocity at t = + ) 
with a slow signal (it remains in a constant value after transitory). 

The classical specifications such as rise time, settling time and maximum overshoot, are special 
cases of examples in Figure 5. All these cases can be also defined in frequency domain. 

2.2 Frequency domain specifications 

The closed-loop specifications for system in Figure 1 are typically defined in terms of 
inequalities on the closed-loop transfer functions for the system, as shown in Equations (2)-(7). 

1. Disturbance rejection at the plant output: 



l + P(ja>)C(jo>) 



2. Disturbance rejection at the plant input: 



c 

J t 

3. Stability: 

4. References Tracking: 

Bi(a>) < 



P(i">) 



1 + P(jco)C(ju) 
P(jiv)C(jto) 



l + P(jcv)C(jco) 
P(jw)P{jco)C(jw 



< 5 po (co) Vw >0, VPe p 



< 8 pi {w) Vw > 0, VP 6 



< A Vo; > 0, VP G p 



l + P(ju>)C(ju>) 



< B u (to)\/iv > 0, VPe p 



(2) 



(3) 



(4) 



(5) 



396 



Robust Control, Theory and Applications 



Allowed operation region 



(a) Regulation problem 




(b) Regulation problem for other initial conditions 




0.5 



(c) Tracking problem 
Fig. 5. Specifications examples in time domain. 



4.5 5 



A Frequency Domain Quantitative Technique for Robust Control System Design 



397 



5. Noise rejection: 



6. Control effort: 



c 




ft 




it 




ft 





P{jco)C{jw) 



l + P(ja>)C(ju>) 



C(jco) 



l + P(jw)C(ju>) 



<5„(co)Vw >0, VPe p 



< Sce(oj) Vw > 0, VP 6 p 



(6) 



(7) 



For specifications in Eq. (2), (3) and (5), arbitrarily small specifications can be achieved 
designing C so that \C(jco) | — > oo (due to the appearance of the M-circle in the Nichols plot). 
So, with an arbitrarily small deviation from the steady state, due to the disturbance, and with 
a sensibility close to zero, the control system is more independent of the plant uncertainty 
Obviously, in order to achieve an increase in |C(;a?)| is necessary to increase the crossover 
frequency 1 for the system. So, to achieve arbitrarily small specifications implies to increase 
the bandwidth 2 of the system. Note that the control effort specification is defined, in this 
context, from the sensor noise n to the control signal u. In order to define this specification 
from the reference, only the closed-loop transfer function from the n signal to u signal must 
be multiplied by F precompensator. However, in QFT, it is not defined in this form because of 
F must be used with other purposes. 

On the other hand, to increase the value of \C(jcv)\ implies a problem in the case of the 
control effort specification and in the case of the sensor noise rejection, since, as was previously 
indicated, the bandwidth of the system is increased (so the sensor noise will affect the system 
performance a lot). A compromise must be achieved among the different specifications. 
The stability specification is related to the relative stability margins: phase and gain margins. 
Hence, supposing that A is the stability specification in Eq. (4), the phase margin is equal to 
2 • arcsin(0.5A) degrees, and the gain margin is equal to 20logio(l + 1/A) dB. 
The output disturbance rejection specification limits the distance from the open-loop transfer 
function L(jto) to the point ( — 1,0) in Nyquist plane, and it sets an upper limit on the 
amplification of the disturbances at the plan output. So, this type of specification is also 
adequated for relative stability. 

2.3 Translation of quantitative specifications from time to frequency domain 

As was previously indicated, QFT is a frequency domain design technique, so, when the 
specifications are given in the time domain (typically in terms of the unit step response), it 
is necessary to translate them to frequency domain. One way to do it is to assume a model for 
the transfer function T cr , closed-loop transfer function from reference r to the output c, and to 
find values for its parameters so that the defined time domain limits over the system output 
are satisfied. 



2.3.1 A first-order model 

Lets consider the simplest case, a first-order model given by T cr (s) - 
r(t) is an unit step the system output is given by c(t) = (K/a)(l ■ 
reach c(t) = r(t) for a time t large enough, K should be K = a. 



K/ (s + a), so that when 
- e~ at ). Then, in order to 



1 The crossover frequency for a system is defined as the frequency in rad/s such that the magnitude of 
the open-loop transfer function L(jco) — P(jco)C(jco) is equal to zero decibels (dB). 

2 The band with of a system is defined as the value of the frequency a^ in rad/s such that 
\Tcr{j&b) /Tcr{Q)\dB = "3 dB, where T cr is the closed-loop transfer function from the reference r to the 
output c. 



398 



Robust Control, Theory and Applications 



For a first-order model t c = 1/a = l/co^is the time constant (represents the time it takes the 
system step response to reach 63.2% of its final value). In general, the greater the bandwith is, 
the faster the system output will be. 

One important difficulty for a first-order model considered is that the first derivative for the 
output (in time infinitesimaly after zero, t = + ) is c = K, when it would be desirable to be 0. 
So, problems appear at the neighborhood of time t = 0. In Figure 6 typical specified time limits 
(from Eq. (5) Bj and B u are the magnitudes of the frequency response for these time domain 
limits) and the system output are shown when a first-order model is used. As observed, 
problems appear at the neighborhood of time t = 0. On the other hand the first-order model 
does not allow any overshoot, so from the specified time limits the first order model would 
be very conservative. Hence, a more complex model must be used for the closed-loop transfer 
function T rr . 




Fig. 6. Inadequate first-order model. 



2.3.2 A second-order model 

In this case, two free parameters are available (assuming unit static gain): the damping factor 
£ and the natural frequency oo n (rad/s). The model is given by 



T(s) 



s 2 + 2^co n s + col 



(8) 



The unit step response, depending on the value of £, is given by 



A Frequency Domain Quantitative Technique for Robust Control System Design 399 



c(t) 



l-e-^*(cos(w ny /T^t) + - S%_ p sm(u>„y/T=Ft)) if £ < 1 



1 _ e -^(cosh(aW£ 2 " If) + g % , smh{w„jl^?t)) if J > 1 

w n yg 1 

l-e-^(l + oV) if g" = 1 



In practice, the step response for a system usually has more terms, but normally it contains 
a dominant second-order component with £ < 1. The second-order model is very popular in 
control system design in spite of its simplicity, because of it is applicable to a large number of 
systems. The most important time domain indexes for a second-order model are: overshoot, 
settling time, rise time, damping factor and natural frequency In frequency domain, its most 
important indexes are: resonance peak (related with the damping factor and the overshoot), 
resonance frequency (related with the natural frequency), and the bandwidth (related with 
the rise time). The resonance peak is defined as m % x \T cr (jco) | = Mp. The resonance frequency 
cop is defined as the frequency at which \T cr (jcOp)\ = Mp. One way to control the overshoot 
is setting an upper limit over Mp. For example, if this limit is fixed on 3 dB, and the practical 
Tcr(jto) for co in the frequency range of interest is ruled by a pair of complex conjugated poles, 
then this constrain assures an overshoot lower than 27%. 

In (Horowitz, 1993) tables with these relations are proposed, where, based on the experience of 
Professor Horowitz, makes to set a second-order model to be located inside the allowed zone 
defined by the possible specifications. As Horowitz suggested in his book, if the magnitude of 
the closed-loop transfer function T cr is located between frequency domain limits B u (co) and 
B\{co) in Eq. (5), then the time domain response is located between the corresponding time 
domain specifications, or at most it would be satisfied them in a very approximated way. 

2.3.3 A third-order model with a zero 

A third-order model with a unit static gain is given by 

T(s) = ^ (9) 

(s 2 + li^COnS + col) (s + }iCO n ) 

For values of \i less than 5, a similar behaviour as if the pole is not added to the second-order 
model is obtained . So, the model in Eq. (8) would must be used. 
If a zero is added to Eq. (9), it results 

T(s) = (l + s/^to n )jico 3 n (1Q) 

(s 2 + 2£,co n s + col) (s + ]ico n ) 

The unit responses obtained in this case are shown in Figure 7 for different values of A. 
As shown in Figure 7, this model implies an improvement with respect to that in Eq. (8), 
because of it is possible to reduce the rise time without increasing the overshoot. Obviously, if 
co n > 1, then the response is co n times faster than the case with co n = 1 (slower for co n < 1). In 
(Horowitz, 1993), several tables are proposed relating parameters in Eq. (10) with time domain 
parameters as overshoot, rise time and settling time. 



400 



Robust Control, Theory and Applications 




Fig. 7. Third-order model with a zero for ]i = 5 and f = 1. 

There exist other techniques to translate specifications from the time domain to the frequency 
domain, such as model-based techniques, where, based on the structures of the plant and 
the controller, a set of allowed responses is defined. Another technique is the one presented in 
(Krishnan and Cruickshanks, 1977), where the time domain specifications are formulated as 
∫₀ᵗ |c(τ) − m(τ)|² dτ ≤ ∫₀ᵗ v²(τ) dτ, with m(t) and v(t) specified time domain functions, so that 
the energy of the signal given by the difference between the system output and the specification m(t) 
must be bounded by the energy of the signal v(t) at each instant t; the translation to the frequency 
domain is given by the inequality |ĉ(jω) − m̂(jω)| ≤ |v̂(jω)|. 
In (Pritchard and Wigdorowitz, 1996) and (Pritchard and Wigdorowitz, 1997), the time-frequency 
relation is studied when uncertainty is included in the system, so that it is possible 
to know the time domain limits of the system response from the frequency response of a set 
of closed-loop transfer functions from the reference to the output. This technique may be used 
to solve the time-frequency translation problem. However, the results obtained in the translation 
from frequency to time and from time to frequency are very conservative. 



2.4 Controller design 

Now, the procedure previously introduced is explained in more detail. The aim is to design 
the 2DoF controller {F, C} in Figure 1, so that a subset of the specifications introduced in section 
2.2 is satisfied, and the stability of the closed-loop system is assured for every plant P in p. 
The specifications in section 2.2 are translated into circles on the Nyquist plane defining allowed 
zones for the function L(jω) = P(jω)C(jω). The allowed zone is the outside of the circle for the 
specifications in Eq. (2)-(6), and the inside for the specification in Eq. (7). Combining the 
allowed zones for each function L corresponding to each plant P in p, a set of restrictions on the 
controller C for each frequency ω is obtained. The limits of these zones represented on the Nichols 
plane are called bounds or boundaries. These constraints in the frequency domain can be formulated 
over the controller C or over the function L₀ = P₀C, for a given plant P₀ in p (the so-called nominal plant). 
In order to explain the detailed design process, the following example, from (Horowitz, 1993), 
is used. Let us suppose that the plant in Figure 1 is given by 

p = { P(s) = k / (s(s + a)) : k ∈ [1, 20], a ∈ [1, 5] }    (11) 

corresponding to a range of motors and loads, where the equation modeling the motor 
dynamics is Jc̈ + Bċ = Ku, with k = K/J and a = B/J in Eq. (11). Let us suppose that the tracking 
specifications are given by 



B_l(ω) ≤ |T_cr(jω)|_dB = | F(jω)P(jω)C(jω) / (1 + P(jω)C(jω)) |_dB ≤ B_u(ω),   ∀P ∈ p, ∀ω > 0    (12)



shown in Figure 8. In Figure 9, the difference δ(ω) = B_u(ω) − B_l(ω) is shown for each 
frequency ω. It is easy to see that, in order to satisfy the specifications in Eq. (12), the following 
inequality must be satisfied 



Δ|T_cr(jω)|_dB = max_{P∈p} | P(jω)C(jω) / (1 + P(jω)C(jω)) |_dB − min_{P∈p} | P(jω)C(jω) / (1 + P(jω)C(jω)) |_dB ≤ δ(ω) = B_u(ω) − B_l(ω),   ∀ω > 0    (13)




Fig. 8. Tracking specifications (variations over a nominal). 




Fig. 9. Specifications on the magnitude variations for the tracking problem. 



Making L = PC large enough, for each plant P in p and for a frequency ω, it is possible 
to satisfy an arbitrarily small specification δ(ω). However, this is not possible in practice, 
since the system bandwidth must be limited in order to minimize the influence of the sensor 
noise at the plant input. Once C has been designed to satisfy the specifications in Eq. (13), the 
second degree of freedom, F, is used to locate those variations inside the magnitude limits B_l(ω) 
and B_u(ω). 

In order to design the first degree of freedom, C, it is necessary to define a set of constraints on 
C or on L₀ in the frequency domain which guarantee that, if C (respectively L₀) satisfies those 
restrictions, then the specifications are satisfied too. As commented above, these constraints are 
called bounds or boundaries in QFT, and in order to compute them it is necessary to take into 
account: 

(i) A set of specifications in the frequency domain which, in the case of the tracking problem, are given 
by Eq. (13), and which in other cases (disturbance rejection, control effort, sensor noise, ...) are 
similar, as shown in section 2.2. 

(ii) An object (representation) modeling the plant uncertainty in the frequency domain, the so-called 
template. 

The following sections explain in more detail the meaning of the templates and the bounds. 

Computation of basic graphical elements to deal with uncertainties: templates 

If there were no uncertainty in the plant, the set p would contain only one transfer function, P, and 
for a frequency ω, P(jω) would be a single point on the Nichols plane. Due to the uncertainty, a set 
of points appears on the Nichols plane for each frequency, one point for each plant P in p. 
These sets are called templates. For example, Figure 10 shows the template for ω = 2 rad/s, 
corresponding to the set: 






Fig. 10. Template for frequency ω = 2 rad/s and the plant given by Eq. (11). 



{ P(j2) = k / (j2 (j2 + a)) : k ∈ [1, 20], a ∈ [1, 5] }

For k = 1 and driving a from 1 to 5, the segment ABC is obtained in Figure 10. For a = 3 and 
driving k from 1 to 20, the segment BE is calculated. For k = 20 and driving a from 1 to 5, the 
segment DEF is obtained. 
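
A minimal sketch of how such a template can be computed numerically is given below; it assumes plain Matlab, and the density of the parameter grid is an arbitrary illustrative choice.

% Template of the plant in Eq. (11) at w = 2 rad/s: the set of points
% P(j2) = k/(j2(j2+a)) on the Nichols plane for k in [1,20], a in [1,5].
w = 2;
[k,a] = meshgrid(linspace(1,20,30), linspace(1,5,30));  % gridded uncertainty
P = k ./ ((1j*w).*(1j*w + a));                          % frequency responses
plot(angle(P)*180/pi, 20*log10(abs(P)), '.');           % phase (deg) vs magnitude (dB)
xlabel('Angle(P) - degrees'); ylabel('|P| - dB');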

Choosing a plant P₀ belonging to the set p, the nominal open-loop transfer function is defined 
as L₀ = P₀C. In order to shift a template on the Nichols plane, a quantity must be added in 
phase (degrees) and another quantity in magnitude (decibels) to all of its points. Using the nominal 
point P₀(jω) as representative of the full template at frequency ω, shaping the value of the 
nominal L₀(jω) = P₀(jω)C(jω) by means of C(jω) is equivalent to adding |C(jω)|_dB in magnitude 
and Angle(C(jω)) degrees in phase to each point P(jω) (with magnitude in decibels and 
phase in degrees) inside the template at frequency ω. So, the shaping of the nominal open-loop 
transfer function at frequency ω (using the degree of freedom C) is equivalent to shifting the 
template at that frequency ω to a specific location on the Nichols plane. 

The choice of the nominal plant for a template is totally free. The design method is valid 
independently of this choice. However, there exist rules for a more adequate choice in 
specific situations (Horowitz, 1993). 

As previously indicated, there exists a template for each frequency, so that after the 
definition of the specifications for the control problem, the following step is to define a set 
of design frequencies Ω. The templates are then computed for each frequency ω in Ω. 
Once the specifications have been defined and the templates have been computed, the third 
step is the computation of the boundaries using these graphical objects and the specifications. 




Derivation of boundaries from templates and specifications 

Now, zones on the Nichols plane are defined for each frequency ω in Ω, so that if the nominal of 
the template shifted by C(jω) is located inside the corresponding zone, then the specifications are satisfied. 
For each specification in section 2.2 and for each frequency ω in Ω, the boundary must be computed 
using the template and the corresponding specification. Details about the different 
types of bounds and the most important algorithms to compute them can be found in (Moreno 
et al., 2006). In general, a boundary at frequency ω defines the limit of a zone on the Nichols plane 
such that, if the nominal L₀(jω) of the shifted template is located inside that zone, then the associated 
specifications are satisfied. So, in its simplest form, a boundary defines a threshold 
value in magnitude for each phase φ on the Nichols plane, so that if Angle(L₀(jω)) = φ, then 
|L₀(jω)|_dB must be located above (or below, depending on the type of specification used to 
compute the boundary) that threshold value. 

It is important to note that sometimes a redefinition of the specifications is necessary. For 
example, for the system in Eq. (11), for ω > 10 rad/s the templates have similar dimensions and 
the specifications from Eq. (13) in Figure 9 are identical. Then, the boundaries for ω > 10 rad/s 
will be almost identical. The function L₀(jω) must be above the boundaries for all frequencies, 
including ω > 10 rad/s, but this is unviable because L₀(jω) → 0 
when ω → ∞. Therefore, it is necessary to open (relax) the tracking specifications at high frequency 
(where, furthermore, the uncertainty is greater), as shown in Figure 8. On the other 
hand, it must also be taken into account that, for a large enough frequency ω, the specification 
δ(ω) in Eq. (13) must be greater than or equal to max_{P∈p} |P(jω)|_dB − min_{P∈p} |P(jω)|_dB, so that, for 
a small value of L₀(jω) at these frequencies, the specifications are also satisfied. The effect 
of this enlargement of the specifications is negligible when the modifications are introduced 
at a large enough frequency; these effects are noticeable in the response in the neighborhood of 
t = 0. 

Considering the tracking bounds as negligible from a certain frequency onwards (in the sense that 
the specification is large enough) implies that the stability boundaries are the dominant 
ones at these frequencies. As mentioned above, since the templates are almost identical at 
high frequencies and the stability specification A is independent of the frequency, the stability 
bounds are also identical and only one of them can be used as representative of the rest. In 
QFT, this boundary is usually called the high frequency bound, and it is denoted by B_h. 
Notice that the use of a discrete set of design frequencies Ω does not pose any problem. 
The variation of the specifications and of the appearance of the templates from a 
frequency ω⁻ to a frequency ω⁺, with ω⁻ < ω < ω⁺, is smooth. In any case, the methodology 
lets us discern the specific cases in which the number of elements of Ω is insufficient, and lets 
us iterate in the design process to incorporate the boundaries for those new frequencies, then 
reshaping the compensator {F, C} again. 

Design of the nominal open-loop transfer function fulfilling the boundaries 

In this stage, the function L₀(jω) must be shaped fulfilling all the boundaries for each frequency. 
Furthermore, it must be assured that the transfer function 1 + L(s) has no zeros in the right half 
plane for any plant P in p. So, initially L₀ = P₀ (C = 1), and poles and zeros are added to this 
function (poles and zeros of the controller C) in order to satisfy all of these restrictions on the 
Nichols plane. In this stage, using only the function L₀, it is possible to assure the fulfillment of 
the specifications for all of the elements in the set p, provided that L₀(jω) is located inside the allowed 
zones defined by the boundary at frequency ω (computed from the corresponding template at 
that frequency and from the specifications). 






Obviously, there exists an infinite number of acceptable functions L₀ satisfying the boundaries 
and the stability condition. In order to choose among all of these functions, an important factor 
to be considered is the sensor noise effect at the plant input. The closed-loop transfer function 
from the noise n to the plant input u is given by 

T_un(s) = −C(s) / (1 + P(s)C(s)) = −(L(s)/P(s)) / (1 + L(s)) 

In the range of frequencies in which |L(jω)| is large (generally low frequency), |T_un(jω)| → 
|1/P(jω)|, so that the value of |T_un(jω)| at low frequency is independent of the design 
chosen for L. In the range of frequencies where |L(jω)| is small (generally high frequency), 
|T_un(jω)| → |C(jω)|. These two asymptotes cross each other at the crossover 
frequency. 

In order to reduce the influence of the sensor noise at the plant input, |C(jω)| → 0 when 
ω → ∞ must be guaranteed. Equivalently, |L₀(jω)| must be reduced as fast 
as possible at high frequency. A conditionally stable³ design for L₀ is especially adequate to 
achieve this objective. However, as shown in (Moreno et al., 2010), this type of design 
poses a problem when a saturation-type nonlinearity is present in the system. 
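
The asymptotic behaviour of T_un described above can be visualised with a short sketch; the plant and controller below are illustrative choices (the plant is one member of the family in Eq. (11), the controller is not the one designed in this chapter), assuming plain Matlab.

% Magnitude of T_un = -C/(1+PC) and its low/high frequency asymptotes.
w   = logspace(-2,3,400);
P   = @(s) 10./(s.*(s+3));          % illustrative plant from the family (11)
C   = @(s) 5*(s+1)./(s+50);         % illustrative controller
L   = P(1j*w).*C(1j*w);
Tun = abs(C(1j*w)./(1+L));
loglog(w, Tun, w, abs(1./P(1j*w)), '--', w, abs(C(1j*w)), ':');
legend('|T_{un}|','|1/P| (low-frequency asymptote)','|C| (high-frequency asymptote)');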



Design of the prefilter 

At this point, only the second degree of freedom, F, must be shaped. The controller C, 
designed in the previous step, only guarantees that the specifications in Eq. (13) are satisfied, 
but not the specifications in Eq. (12). Using F, it is possible to guarantee that the specifications 
in Eq. (12) are satisfied, provided that C assures the specifications in Eq. (13). 
In order to design F, the most common method consists of computing, for each frequency ω, 
the following limits 



F_u(ω) = B_u(ω) − max_{P∈p} | P(jω)C(jω) / (1 + P(jω)C(jω)) |_dB 

and 

F_l(ω) = B_l(ω) − min_{P∈p} | P(jω)C(jω) / (1 + P(jω)C(jω)) |_dB 

and then shaping F by adding poles and zeros until F_l(ω) ≤ |F(jω)|_dB ≤ F_u(ω) for every frequency ω in Ω. 
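
A minimal sketch of how these limits can be evaluated numerically for the example of Eq. (11) is given below; it assumes plain Matlab, and the controller C, the parameter grid and the specification values B_u, B_l on the grid are all placeholder/illustrative choices, not the ones designed in this chapter.

% Prefilter limits F_u and F_l (in dB) over a frequency grid.
w     = [0.5 1 2 5 10];                       % illustrative design frequencies (rad/s)
Bu_dB = [ 6  6  5  3  0];                     % illustrative upper tracking spec (dB)
Bl_dB = [-1 -2 -4 -10 -20];                   % illustrative lower tracking spec (dB)
C     = @(s) 1;                               % placeholder controller C(s)
Tmax  = -inf(size(w)); Tmin = inf(size(w));
for k = linspace(1,20,10)
  for a = linspace(1,5,10)
    P = @(s) k./(s.*(s+a));
    T = 20*log10(abs(P(1j*w).*C(1j*w)./(1 + P(1j*w).*C(1j*w))));
    Tmax = max(Tmax,T); Tmin = min(Tmin,T);
  end
end
Fu_dB = Bu_dB - Tmax;                         % upper limit for |F(jw)| in dB
Fl_dB = Bl_dB - Tmin;                         % lower limit for |F(jw)| in dB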

Validation of the design 

This is the last step in the design process and consists in studying the magnitude of the 
different closed-loop transfer functions, checking if the specifications for frequencies outside 
of the set Q are satisfied. If any specification is not satisfied for a specific frequency, co v , 
then this frequency is added to the set Q, and the corresponding template and boundary are 



3 A system is conditionally stable if a gain reduction of the open-loop transfer function L drives the 
closed-loop poles to the right half plane. 



406 Robust Control, Theory and Applications 

computed for that frequency to p. Then, the function Lq is reshaped, so that the new restriction 
is satisfied. Afterwards, the precompensator F is reshaped, and finally the new design is 
validated. So, an iterative procedure is followed until the validation result is satisfactory. 

3. Computer-based tools for QFT 

As described in the previous section, the QFT framework involves several 
stages, where a continuous re-design process must be followed. Furthermore, some 
steps require the use of algorithms to compute the corresponding parameters. Therefore, 
computer-based tools supporting the QFT methodology are highly valuable to help in 
the design procedure. This section briefly describes the most well-known tools available in 
the literature: the Matlab QFT Toolbox (Borghesani et al., 2003) and SISO-QFTIT (Diaz et al., 
2005a; Diaz et al., 2005b). 

3.1 Matlab QFT toolbox 

The QFT Frequency Domain Control Design Toolbox is a commercial collection of Matlab 
functions for designing robust feedback systems using QFT, supported by the company 
Terasoft, Inc (Borghesani et al., 2003). The QFT Toolbox includes a convenient GUI that 
facilitates classical loop shaping of controllers to meet design requirements in the face of 
plant uncertainty and disturbances. The interactive GUI for shaping controllers provides 
a point-click interface for loop shaping using classical frequency domain concepts. The 
toolbox also includes powerful bound computation routines which help in the conversion of 
closed-loop specifications into boundaries on the open-loop transfer function (Borghesani et al., 
2003). 

The toolbox is used as a combination of Matlab functions and graphical interfaces to perform 
a complete QFT design. The best way to do that is to create a Matlab script including all the 
required calls to the corresponding functions. The following lines briefly describe the main 
steps and the functions to use, following an example presented in (Borghesani et al., 2003) 
for a better understanding (a more detailed description can be found there). 
The example to follow is described by: 

p = { P(s) = k / ((s + a)(s + b)) : k ∈ [1, 2, 5, 8, 10], a ∈ [1, 3, 5], b ∈ [20, 25, 30] }.    (14)

Once the process and the associated uncertainties are defined, the different steps to design 
the robust control scheme using the QFT toolbox, explained in section 2, are described in the 
following: 

• Template computation. First, the transfer function models representing the process 
uncertainty must be written. The following code calculates a matrix of 40 plant elements 
which is stored in the variable P and represents the system defined by Eq. (14). 
» c = 1; k = 10; b = 20; 
» for a = linspace(1,5,10), 
      P(1,1,c) = tf(k,[1,a+b,a*b]); c = c + 1; 
» end 
» k = 1; b = 30; 
» for a = linspace(1,5,10), 
      P(1,1,c) = tf(k,[1,a+b,a*b]); c = c + 1; 
» end 
» b = 30; a = 5; 
» for k = linspace(1,10,10), 
      P(1,1,c) = tf(k,[1,a+b,a*b]); c = c + 1; 
» end 
» b = 20; a = 1; 
» for k = linspace(1,10,10), 
      P(1,1,c) = tf(k,[1,a+b,a*b]); c = c + 1; 
» end 
Then, the nominal element is selected: 

» nompt=21; 
and the frequency array is set: 

» w = [0.1, 5, 10, 100]; 
Finally, the templates are calculated and visualized using the plottmpl function (see 
(Borghesani et al., 2003) for a detailed explanation): 

» plottmpl(w,P,nompt); 
obtaining the templates shown in Figure 11. 







Fig. 11. Matlab QFT Toolbox. Templates for example in Eq. (14) 

• Specifications. In this step, the system specifications must be defined according to Eq. (2)-(7). 
Once the specifications are determined, the corresponding bounds on the Nichols plane are 
computed. The following source code shows the use of the specifications in Eq. (2)-(4) for this 
example. 

A stability specification of A = 1.2 in Eq. (4), corresponding to a gain margin (GM) ≥ 5.3 
dB and a phase margin (PM) = 49.25 degrees, is given: 

» Ws1 = 1.2; 
Then, the stability bounds are computed using the function sisobnds (see (Borghesani et al., 
2003) for a detailed explanation) and stored in the variable bdb1: 






» bdb1 = sisobnds(1,w,Ws1,P,0,nompt); 
Let us now consider the specifications for the output and input disturbance rejection cases, from 
Eq. (2)-(3). For the output disturbance specification, the performance weight for 
the bandwidth [0,10] is defined as 

» Ws2 = tf(0.02*[1,64,748,2400],[1,14.4,169]); 
and the bounds are computed in the following way 

» bdb2 = sisobnds(2,w(1:3),Ws2,P,0,nompt); 
For the input disturbance case, the specification is defined as a constant, 

» Ws3 = 0.01; 
and the bounds are calculated as 

» bdb3 = sisobnds(3,w(1:3),Ws3,P,0,nompt); 
also for the bandwidth [0,10]. 

Once the specifications are defined and the corresponding bounds are calculated, the bounds for each 
frequency can be combined using the following functions: 

» bdb = grpbnds(bdb1,bdb2,bdb3); % making a global structure 

» ubdb = sectbnds(bdb); % combining bounds 
The resulting bounds, which will be used for the loop-shaping stage, are shown in Figure 12. 
This figure is obtained using the plotbnds function: 

» plotbnds(ubdb); 






Fig. 12. Matlab QFT Toolbox. Boundaries for example (14) 

• Loop-shaping. After obtaining the stability and performance bounds, the next step consists in 
designing (loop shaping) the controller. The QFT toolbox includes an interactive graphical 
GUI, lpshape, which helps to perform this task in a straightforward way. Before using 
this function, it is necessary to define the frequency array for loop shaping, the nominal 
plant, and the initial controller transfer function. Therefore, these variables must be set 
previously; for this example they are given by: 

» wl = logspace(-2,3,100); % frequency array for loop shaping 

» C0 = tf(1,1); % initial controller 

» L0 = P(1,1,nompt)*C0; % nominal open-loop transfer function 
Having defined these variables, the graphical interface is opened using the following line: 

» lpshape(wl,ubdb,L0,C0); 
obtaining the window shown in Figure 13. As shown in this figure, the GUI allows the controller 
transfer function to be modified by adding, modifying, and removing poles and zeros. 
This task can be done from the options available at the right-hand area of the window or 
by dragging interactively the loop L₀(s) = P₀(s)C(s), represented by the black line on 
the Nichols plane. 
For this example, the final controller is given by (Borghesani et al., 2003) 



C(s) = 379 (s/… + 1) / ( s²/247² + s/247 + 1 )    (15)










Fig. 13. Matlab QFT Toolbox. Loop shaping for example in Eq. (14) 

• Pre-filter design. When the control design requires tracking of reference signals (although 
this is not the case for this example), a pre-filter F(s) must be used in addition to 
the controller C(s), as discussed in section 2. The prefilter can also be designed 
interactively, using a graphical interface similar to that described for the loop shaping stage. 
To run this option, the pfshape function must be used (see (Borghesani et al., 2003) for more 
details). 

• Validation. The control system validation can be done by testing the resulting robust controller 
for all uncertain plants defined by Eq. (14) and checking that the different specifications 
are fulfilled for all of them. This task can be performed by programming directly in Matlab or 
by using the chksiso function from the QFT toolbox. 
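
A minimal sketch of such a direct check in plain Matlab is given below (as an alternative to chksiso); it assumes that the plant array P built above and a variable C holding the final controller of Eq. (15) are already in the workspace, and it only verifies the stability specification Ws1 as an example.

% Check |PC/(1+PC)| <= Ws1 over a dense frequency grid for all 40 plants.
wv = logspace(-2,3,200);                     % validation frequency grid
ok = true;
for c = 1:size(P,3)
    T   = feedback(P(1,1,c)*C,1);            % closed-loop transfer PC/(1+PC)
    mag = abs(squeeze(freqresp(T,wv)));
    ok  = ok && all(mag <= Ws1 + 1e-9);      % small numerical tolerance
end
disp(ok)                                     % true if the spec holds everywhere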



3.2 An interactive tool based on Sysquake: SISO-QFTIT 

SISO-QFTIT is a free interactive software tool for robust control design using the QFT 
methodology (Diaz et al., 2005a;b). The main advantages of SISO-QFTIT compared to other 
existing tools are its ease of use and its interactive nature. In the tool described in the 
previous section, a combination of code and graphical interfaces must be used, where 
some interactive features are also provided for the loop shaping and filter design stages. 
With SISO-QFTIT, however, all the stages are available from an interactive point of view. 
As commented above, the tool has been implemented in Sysquake, a Matlab-like language 
with fast execution and excellent facilities for interactive graphics (Piguet, 2004). The Windows, 
Mac, and Linux operating systems are supported. Since this tool is completely interactive, one 
consideration that must be kept in mind is that the tool's main feature, interactivity, cannot 
be easily illustrated in a written text. Thus, the reader is cordially invited to experience the 
interactive features of the tool. 

Users mainly operate through mouse actions on the different elements of the application 
window, or by inserting text in dialog boxes. The actions they carry out are 
instantly reflected in all the graphics on the screen. In this way, users become visually aware 
of the effects that their actions produce on the design they are carrying out. This tool is 
specially conceived both for beginners who want to learn the QFT methodology and 
for expert users (Diaz et al., 2005b). 

The user can work with SISO-QFTIT in two different, but not mutually exclusive, ways (Diaz et al., 
2005b): 

• Interactive mode. In this mode, the user selects an element in the window and drags 
it to a certain value; these actions on the element are simultaneously reflected in 
all the figures present in the tool window. 

• Dialogue mode. In this mode, the user simply selects entries of the 
Settings menu and fills in the blanks of the dialog boxes. 

As commented in the manual of this interactive software tool, its main interactive 
features and options are the following (Diaz et al., 2005b): 

• The variations that take place in the templates when the uncertainty of the different 
elements of the plant, or the value of the template computation frequency, is modified. 

• The individual or combined variation of the bounds as a result of the configuration of the 
specifications, i.e., by adding zeros and poles to the different specifications. 

• The movement of the controller zeros and poles over the complex plane, and the 
modification of its symbolic transfer function, when the open-loop transfer function is 
modified on the Nichols plane. 

• The change of shape of the open-loop transfer function on the Nichols plane, and the 
variation of the expression of the controller transfer function, upon any movement, 
addition or suppression of its zeros or poles in the complex plane. 

• The changes that take place in the time domain representation of the manipulated and 
controlled variables due to the modification of the nominal values of the different elements 
of the plant. 

• The changes that take place in the time domain representation of the manipulated and 
controlled variables due to the introduction of a step perturbation at the input of the plant. 
The magnitude and the occurrence instant of the perturbation are configured by the user by 
means of the mouse. 

As pointed out above, the interactive capabilities of the tool cannot be shown in a 
written text. However, some screenshots for the example used with the Matlab QFT toolbox 
are provided. Figure 14a shows the resulting templates for the process defined by Eq. (14). 
Notice that with this tool the frequencies, the process uncertainties and the nominal plant 
can be interactively modified. The stability bounds are shown in Figure 14b. The radio buttons 
available at the top-right side of the tool allow choosing the desired specification. Once the 
specification is selected, the rest of the screen changes to include the specification values 
in an interactive way. Figure 15a displays the loop shaping stage with the combination of 
the different bounds (the same result as in Figure 13). The figure also shows the resulting loop 
shaping for the controller in Eq. (15). The validation screen is shown in Figure 15b, where it is 
possible to check interactively whether the robust control design satisfies the specifications for all 
uncertain cases. Although for this example it is not necessary to design the pre-filter for 
tracking specifications, the tool also provides a screen where this task can be performed 
(see an example in Figure 16). 






(a) QFT Templates (b) Stability bounds 

Fig. 14. SISO-QFTIT. Templates and bounds for the example described in Eq. (14) 







(a) Loop shaping (b) Validation 

Fig. 15. SISO-QFTIT. Loop shaping and validation for the example described in Eq. (14) 



Fig. 16. SISO-QFTIT. Prefilter stage 

4. Practical applications 

This section presents two industrial projects where the QFT technique has been successfully 
used. The first one is focused on the pressure control of a mobile robot which was designed 
for spraying tasks in greenhouses (Guzman et al., 2008). The second one deals with the 
temperature control of a solar collector field (Cirre et al., 2010). 



4.1 In the agricultural and robotics context: Fitorobot 

During the last six years, the Automatic Control, Robotics and Electronics research group 
and the Agricultural Engineering Department, both from the University of Almeria (Spain), 
have been working on a project aimed at designing, implementing, and testing a multi-use 
autonomous vehicle with safe, efficient, and economic operation, which moves through the 
crop lines of a greenhouse and performs tasks that are tedious and/or hazardous for 
people. This robot has been called Fitorobot. The first version of this vehicle has been equipped 
for spraying activities, but other configurations have also been designed, such as a lifting 
platform to reach high zones to perform tasks (staking, cleaning leaves, harvesting, manual 
pollination, etc.), and a forklift to transport and raise heavy materials (Sanchez-Gimeno et al., 
2006). This mobile robot was designed and built following the mechatronics paradigm, as 
described in (Sanchez-Hermosilla et al., 2010). 

The first objective of the project consisted of developing a prototype to enable the spraying 
of a certain volume of chemical products per hectare while controlling the different variables 
that affect the spraying system (pressure, flow, and travel speed). The pressure is selected and 
the control signal keeps the spraying conditions constant (mainly droplet size). The reference 
value of the pressure is calculated based on the mobile robot speed and the volume of pesticide 
to apply, where the pressure working range is between 5 and 15 bar. 

There are some circumstances where it is impossible to maintain a constant velocity due 
to the irregularities of the soil, the different slopes of the ground, and the turning movements 
between the crop lines. Thus, to work at a variable velocity (Guzman et al., 2008), it is 
necessary to spray using a variable-pressure system based on the vehicle velocity, which is 
the approach adopted and implemented in this work. This system presents some advantages, 
such as a higher quality of the process, because the product sprayed over each plant is 
optimal. Furthermore, this system saves chemical products because an optimal quantity is 
sprayed, reducing the environmental impact and pollution as the volume sprayed into the air is 
minimized. 

The robot prototype (Figure 17) consists of an autonomous mobile platform with a rubber 
tracked system and a differential guidance mechanism (to achieve a more homogeneous 
distribution of soil-compaction pressure, thus disturbing less the sandy soil typical of 
Mediterranean greenhouses (Sanchez-Gimeno et al., 2006)). The robot is driven by hydraulic 
motors fed by two variable displacement pumps powered by a 20-HP gasoline motor, 
allowing a maximum velocity of 2.9 m/s. Due to the restrictions imposed by the narrow 
greenhouse lanes, the vehicle dimensions are 70 cm width, 170 cm length, and 180 cm height 
at the top of the nozzles. 




Fig. 17. Mobile robot for agricultural tasks 







Fig. 18. Scheme of the spraying system 






The spraying system carried by the mobile robot is composed of a 300 l tank used to 
store the chemical products, a vertical boom sprayer with 10 nozzles, an on/off electrovalve 
to activate the spraying, a proportional electrovalve to regulate the output pressure, a 
double-membrane pump with a pressure accumulator providing a maximum flow of 30 l/min 
and a maximum pressure of 30 bar, and a pressure sensor to close the control loop, as shown 
in Figure 18. 

In this case, the control problem was focused on regulating the output pressure of the 
spraying system mounted on the mobile robot despite changes in the vehicle velocity and 
the nonlinearities of the process. 

For an adequate control system design, it was necessary to model the plant by obtaining its 
associated parameters. Several open-loop step-based tests were performed varying the valve 
aperture around a particular operating point. The results showed that the system dynamics 
can be approximated by a first-order system with delay. Thus, it can be modelled using the 
following transfer function 

P(s) = k e^{−t_r s} / (τs + 1)    (16) 

where k is the static gain, t_r is the delay time, and τ is the time constant. 

Then, several open-loop experiments were performed to obtain the dynamic model of 
the spraying system, using opening steps of different amplitudes (5% and 10%) around the same 
operating points (see Figure 19a). The analysis of the results showed that the output-pressure 
behavior changes when valve steps of different amplitudes are applied around the same 
working point, and also when the same valve opening steps are applied at several operating 
points, confirming the uncertainty and the nonlinear characteristics of the system. 













(a) Time domain (b) Frequency domain 

Fig. 19. System uncertainties from the time and frequency domains 

After analyzing the results (see Figure 19a), the system was modelled as a first-order 
dynamical system with uncertain parameters, where the reaction curve method was 
used at the different operating points. Therefore, the resulting uncertain model is given by 
the following transfer function (see Figure 19b): 



p = { P(s) = k / (τs + 1) : k ∈ [−0.572, −0.150], τ ∈ [0.4, 1] }    (17)






where the gain k is given in bar/% of aperture and the time constant τ in seconds. 

Once the system was characterized, the robust control design using QFT was performed 
considering specifications on stability and tracking. 
First, the specifications for each frequency were defined and the nominal plant P₀ was 
selected. The set of design frequencies was set to Ω = {0.1, 1, 2, 10} rad/s, and the nominal 
plant P₀ was chosen within the family in Eq. (17). The stability specification was set to A = 1.2, 
corresponding to a GM ≥ 5.3 dB and a PM = 49.25 degrees, and for the tracking specifications 
the maximum and minimum values of the magnitude were described by the following transfer 
functions (the frequency responses of the tracking specifications are shown in Figure 19b as 
dashed lines) 



B_u(s) = 10 / (s + 10),    B_l(s) = 12.25 / (s² + 8.75s + 12.25)    (18)
Figure 20a shows the different templates of the plant for the set of frequencies determined 
above. 
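
A minimal sketch of how these templates could be reproduced with the QFT toolbox functions of section 3.1 is given below; the parameter grid density and the choice of the nominal plant index are illustrative assumptions, not values stated in the chapter.

% Gridding the uncertain pressure model of Eq. (17) and computing its
% templates at the design frequencies.
c = 1;
for k = linspace(-0.572,-0.150,5)
    for tau = linspace(0.4,1,5)
        P(1,1,c) = tf(k,[tau 1]); c = c + 1;
    end
end
w = [0.1 1 2 10];            % design frequencies (rad/s), as in the text
nompt = 1;                   % nominal plant index (illustrative choice)
plottmpl(w,P,nompt);         % templates on the Nichols plane (cf. Figure 20a)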
















(a) QFT Templates 



(b) Loop shaping stage 



Fig. 20. Templates and feedback controller design by QFT 

The specifications are translated into boundaries on the Nichols plane for the loop transfer 
function L(jω) = C(jω)P(jω). Figure 20b shows the different bounds for the stability and tracking 
specifications set previously. 

Then, the loop shaping stage was performed in such a way that the nominal loop transfer 
function L₀(jω) = C(jω)P₀(jω) was adjusted to make the templates fulfil the bounds 
calculated in the previous phase. Figure 20b shows the design of L₀, where the bounds are 
fulfilled at each design frequency. The optimal controller obtained with QFT would lie exactly 
on the boundaries at each design frequency; however, a simpler controller fulfilling the 
specifications was preferred for practical reasons. The resulting controller was the following: 



C(s) = 27.25 (s + 1) / s    (19)



To conclude the design process, the prefilter F is determined so that the closed-loop transfer 
function matches the robust tracking specifications, that is, the closed-loop system variations 
must lie inside the desired tolerance range: 

F(s) = 1 / (0.1786s + 1)    (20)






Once the robust design was performed, the system was validated by simulation. Figure 21 
shows the validation results, where the specifications are clearly satisfied for the whole family 
of plants described by Eq. (17), in the time domain and in the frequency domain, respectively. 





(a) Time domain 



(b) Frequency domain 



Fig. 21. Validation for the QFT design of the pressure system 
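
A minimal sketch of how such a simulation-based validation could be reproduced is given below. It assumes the Matlab Control System Toolbox and the controller and prefilter as reconstructed in Eqs. (19)-(20); since the plant gain in Eq. (17) is negative, the controller is assumed to be applied with reverse action so that the loop gain is positive (an assumption about the sign convention, not stated explicitly in the chapter).

% Closed-loop step responses for the extreme plants of Eq. (17).
C = tf(27.25*[1 1],[1 0]);            % controller of Eq. (19)
F = tf(1,[0.1786 1]);                 % prefilter of Eq. (20)
figure; hold on;
for k = [-0.572 -0.150]
    for tau = [0.4 1]
        P = tf(k,[tau 1]);
        L = -P*C;                     % reverse-acting loop (assumed, since k < 0)
        step(F*feedback(L,1), 5);     % reference-to-output response, 5 s horizon
    end
end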

If the results shown in Figure 19b are compared with those shown in Figure 21b, a considerable 
uncertainty reduction can be appreciated, especially in the system gain. Notice that Figure 19 
shows the responses of the open-loop system to step inputs in the time and frequency 
domains, respectively. From these figures, the system uncertainties can be observed as 
deviations in the static gain and in the time constant of the system, as described in 
Eq. (17). 

Finally, the proposed control scheme was tested on the spraying system. The robust control 
system is characterized by the ability of the closed-loop system to reach the desired specifications 
satisfactorily despite large variations in the (open-loop) plant dynamics. As commented 
above, in the pressure system presented in this work such variations appear along the different 
operating points of the process. Therefore, the system was initially tested through a set 
of different steps in order to verify that the control system fulfills the robust specifications. 
Figure 22 shows the results for a sequence of typical steps. It can be observed that the system 
faithfully follows the proposed reference, reaching the same performance at the different 
operating points. 

4.2 In solar energy field: ACUREX 

This section presents a robust control scheme for a distributed solar collector (DSC) field. 
As DSC fields are systems subject to strong disturbances (mainly in solar radiation and inlet 
oil temperature), a series feedforward was used as a part of the plant, so that the system 
to be controlled has one input (fluid flow) and one output (outlet temperature); the 
disturbances are partially compensated by the series feedforward term, and the nonlinear 
plant is thus transformed into an uncertain linear system. The QFT technique was then used to 
design a control structure that guarantees the desired control specifications, such as settling time and 
maximum overshoot, under different operating conditions despite system uncertainties and 
disturbances (Cirre et al., 2010). 

The main difference between a conventional power plant and a solar plant is that the 
primary energy source, while being variable, cannot be manipulated. The objective of the 









Fig. 22. Experimental tests for the spraying system 

control system in a distributed solar collector (DSC) field is to maintain the outlet oil 
temperature of the loop at a desired level in spite of disturbances such as changes in the 
solar irradiance level (caused by clouds), mirror reflectivity, or inlet oil temperature. The 
means available for achieving this is the adjustment of the fluid flow, and the daily solar 
power cycle characteristics are such that the oil flow has to change substantially during 
operation. This leads to significant variations in the dynamic characteristics of the field, which 
cause difficulties in obtaining adequate performance over the whole operating range with a fixed-
parameter controller (Camacho et al., 1997; 2007a;b). For that reason, this section summarizes 
a work developed by the authors in which a robust PID controller is designed to control the 
outlet oil temperature of a DSC loop using the QFT technique. 

In this work, the ACUREX thermosolar plant was used, which is located at the Plataforma 
Solar de Almeria (PSA), a research centre of the Spanish Centro de Investigaciones Energeticas, 
Medioambientales y Tecnologicas (CIEMAT), in Almeria, Spain. The plant is schematically 
composed of a distributed collector field, a recirculation pump, a storage tank and a 
three-way valve, as shown in Figures 23 and 24. The distributed collector field consists of 480 
east-west-aligned single-axis-tracking parabolic trough collectors, with a total mirror aperture 
area of 2672 m², arranged in 20 rows forming 10 parallel loops (see Figure 23). The parabolic 
mirrors in each collector concentrate the solar irradiation on an absorber tube through which 
Santotherm 55 heat transfer oil flows. For the collector to concentrate sunlight on its focus, 
the direct solar radiation must be perpendicular to the mirror plane. Therefore, a sun-tracking 
algorithm causes the mirrors to revolve around an axis parallel to the tube. Oil is recirculated 
through the field by a pump that, under nominal conditions, supplies the field at a flow rate 
of between 2 l/s (in some applications 3 l/s) and 12 l/s. As it passes through the field, the 
oil is heated, and then the hot oil enters a thermocline storage tank, as shown in Figure 24. A 
complete and detailed description of the ACUREX plant can be found in (Camacho et al., 1997). 




Fig. 23. ACUREX solar plant 







Fig. 24. Simplified layout of the ACUREX plant 

As described in (Camacho et al., 1997), DSC dynamics can be approximated by low-order 
linear descriptions of the plant (as is usually done in the process industry) in order to model the 
system around different operating conditions and to design diverse control strategies without 
accounting for system resonances (Alvarez et al., 2007; Camacho et al., 1997). Thus, different 
low-order models are found for different operating points, mainly due to the fluid velocity and 
the system disturbances. Using the series feedforward controller (presented in (Camacho et al., 
1997) and improved in (Roca et al., 2008)), the nonlinear plant subjected to disturbances is 
treated as an uncertain linear plant with only one input (the reference temperature to the 
feedforward controller, T_rff). 

After performing an analysis of the frequency response (Berenguel et al., 1994), it was 
observed that the characteristics of the system (time constants, gains, resonance modes, 
...) depend on the fluid flow rate, as expected (Alvarez et al., 2007; Camacho et al., 1997). 
Therefore, in order to control the system with a fixed-parameter controller, the following 
model has been used 

p = { P(s) = k ω_n² e^{−T_d s} / (s² + 2ξω_n s + ω_n²) : ξ = 0.8, T_d = 39 s, ω_n ∈ [0.0038, 0.014] rad/s, k ∈ [0.7, 1.05] },    (21) 

where the chosen nominal plant is P₀(s) with ω_n = 0.014 rad/s and k = 0.7. 
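
A minimal sketch of how the frequency responses of this uncertain family can be inspected is given below; it assumes the Matlab Control System Toolbox, and only the extreme parameter values are plotted.

% Frequency responses of the uncertain ACUREX model of Eq. (21).
xi = 0.8; Td = 39;
figure; hold on;
for wn = [0.0038 0.014]
    for k = [0.7 1.05]
        P = tf(k*wn^2,[1 2*xi*wn wn^2]);   % second-order part of Eq. (21)
        P.InputDelay = Td;                 % 39 s transport delay
        bode(P, logspace(-4,-1,200));      % magnitude and phase vs frequency
    end
end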






Thus, once the uncertain model had been obtained, the specifications were determined in the time 
domain and translated into the frequency domain for the QFT design. In this case, tracking 
and stability specifications were established (Horowitz, 1993). For the tracking specifications 
it is only necessary to impose the minimum and maximum values of the magnitude of the 
closed-loop transfer function from the reference input to the output at all frequencies. With 
respect to the stability specification, the desired gain (GM) and phase (PM) margins are set. 
The tracking specifications were required to fulfill a settling time between 5 and 35 minutes 
and an overshoot of less than 30% after 10-20 °C setpoint changes for all operating conditions 
(realistic specifications, see (Camacho et al., 2007a;b)). 

For the stability specification, A = 3.77 in Eq. (4) was selected in order to guarantee a phase 
margin of at least 35 degrees for all operating conditions. 

To design the compensator C(s), the tracking specifications in Eq. (13), shown in Table 1 for 
each frequency in the set of design frequencies Ω, are used. 

Table 1. Tracking specifications for the C compensator design 



ω (rad/s)    0.0006   0.001   0.003   0.01 
δ(ω) (dB)    0.55     1.50    9.01    19.25 



The resulting compensator C(s), synthesized in order to achieve the stability and tracking 
specifications previously indicated, is the following PID-type controller 

C(s) = 0.75 ( 1 + 1/(18s) + 40s )    (22) 

whose resulting loop shaping is shown in Figure 25. 

Then, in order to satisfy the tracking specifications, the prefilter F(s) must be designed; the 
synthesized prefilter is given by 

F(s) = …    (23) 






Fig. 25. Tracking and stability boundaries with the designed L₀(jω) 

Figure 26 shows that the tracking specifications are fulfilled for all uncertain cases. Note that 
the different appearance of the closed-loop Bode diagrams for the five operating conditions is due 
to the changing root locus of L(s) when the PID is introduced. 




Fig. 26. Tracking specifications (dashed-dotted) and magnitude Bode diagram of some closed 
loop transfer functions 

In order to prove the fulfillment of the tracking and stability specifications of the control 
structure, experiments were performed at several operating points and under different 
disturbance conditions (Cirre et al., 2010), although only representative results are shown 
in this work. 

Figure 27 shows an experiment with the robust controller. At the beginning of the experiment, 
the flow is saturated until the outlet temperature becomes higher than the inlet one (the normal 






Fig. 27. QFT control results for the ACUREX plant (24/03/2009) (Cirre et al., 2010) 

situation during the operation). This situation always occurs because the oil resident inside 
the pipes is cooler than the oil coming from the tank. Once the oil is mixed in the pipes, the outlet 
temperature reaches a higher value than the inlet one. During the start-up, steps in the 
reference temperature are made until the nominal operating point is reached. The overshoot 
at the end of this phase is approximately 18 °C, and thus the specifications are fulfilled. 
Analyzing the time responses, a settling time between 11 and 15 minutes is observed at the 
different operating points. Therefore, both time specifications, overshoot and settling time, are 
properly fulfilled. Disturbances in the inlet temperature (from the beginning until t = 12.0 h), 
due to the temperature variation of the stratified oil inside the tank, are observed during this 
experiment and correctly rejected by the feedforward action (Cirre et al., 2010). 

5. Conclusions 

This chapter has introduced Quantitative Feedback Theory as a frequency domain robust control 
technique. QFT is a powerful tool which allows designing robust controllers taking into account 
the plant uncertainty, disturbances, noise and the desired specifications. It is a very versatile 
tool and has been used in multiple control problems, including linear (Horowitz, 1963), 
non-linear (Moreno et al., 2010; Moreno et al., 2003; Moreno, 2003), MIMO (Horowitz, 1979) and 
non-minimum phase (Horowitz and Sidi, 1978) systems. After describing the theoretical aspects, 
the most well-known software tools to work with QFT have been described using simple examples. 
Then, results from two experimental applications were presented, where QFT was successfully 
used to compensate for the uncertainties in the processes. 

6. References 

J.D. Alvarez, L. Yebra, and M. Berenguel. Repetitive control of tubular heat exchangers. Journal 

of Process Control, 17:689-701, 2007. 
M. Berenguel, E.F. Camacho, and F.R. Rubio. Simulation software package for the acurex field. 

Technical report, Dep. Ingenieria de Sistemas y Automatica, University of Seville 

(Spain), 1994. www.esi2.us.es/ rubio/ libro2.html. 
C. Borghesani, Y. Chait, and O. Yaniv. The QFT Frequency Domain Control Design Toolbox. 

Terasoft, Inc., http://www.terasoft.com/qft/QFTManual.pdf, 2003. 
E.F. Camacho, M. Berenguel, and F.R. Rubio. Advanced Control of Solar Plants (1st edn). Springer, 

London, 1997. 
E.F. Camacho, F.R. Rubio, M. Berenguel, and L. Valenzuela. A survey on control schemes for 

distributed solar collector fields, part i: modeling and basic control approaches. Solar 

Energy, 81:1240-1251, 2007a. 
E.F. Camacho, F.R. Rubio, M. Berenguel, and L. Valenzuela. A survey on control schemes for 

distributed solar collector fields, part ii: advances control approaches. Solar Energy, 

81:1252-1272, 2007b. 
M.C. Cirre, J.C. Moreno, M. Berenguel, and J.L. Guzman. Robust control of solar plants with 

distributed collectors. In IFAC International Symposium on Dynamics and Control of 

Process Systems, DYCOPS, Leuven, Belgium, 2010. 
J. M. Diaz, S. Dormido, and J. Aranda. Interactive computer-aided control design using 

quantitative feedback theory: The problem of vertical movement stabilization on a 

high-speed ferry. International Journal of Control, 78:813-825, 2005a. 
J. M. Diaz, S. Dormido, and J. Aranda. SISO-QFTIT An interactive software tool 

for the design of robust controllers using the QFT methodology. UNED, 

http://ctb.dia.uned.es/asig/qftit/, 2005b. 




J.L. Guzman, F. Rodriguez, J. Sanchez-Hermosilla, and M. Berenguel. Robust pressure control 
in a mobile robot for spraying tasks. Transactions of the ASABE, 51(2):715-727, 2008. 
I. Horowitz. Synthesis of Feedback Systems. Academic Press, New York, 1963. 
I. Horowitz. Quantitative feedback theory. IEEE Proc, 129 (D-6):215-226, 1982. 
I. Horowitz and M. Sidi. Synthesis of feedback systems with large plant ignorance for 

prescribed time-domain tolerances. International Journal of Control, 16 (2):287-309, 

1972. 
I. M. Horowitz. Quantitative Feedback Design Theory (QFT). QFT Publications, Boulder, 

Colorado, 1993. 
I.M. Horowitz. Quantitative synthesis of uncertain multiple input-output feedback systems. 

International Journal of Control, 30:81-106, 1979. 
I.M. Horowitz and M. Sidi. Optimum synthesis of non-minimum phase systems with plant 

uncertainty. International Journal of Control, 27(3):361-386, 1978. 
K. R. Krishnan and A. Cruickshanks. Frequency domain design of feedback systems for 

specified insensitivity of time-domain response to parameter variations. International 

Journal of Control, 25 (4):609-620, 1977. 
M. Morari and E. Zafiriou. Robust Process Control. Prentice Hall, 1989. 
J. C. Moreno. Robust control techniques for systems with input constraints (in Spanish: Control 
Robusto de Sistemas con Restricciones a la Entrada). PhD thesis, University of Murcia, 
Spain (Universidad de Murcia, España), 2003. 
J. C. Moreno, A. Banos, and M. Berenguel. A synthesis theory for uncertain linear systems 

with saturation. In Proceedings of the 4th IFAC Symposium on Robust Control Design, 

Milan, Italy, 2003. 
J. C. Moreno, A. Baños, and M. Berenguel. Improvements on the computation of boundaries 
in QFT. International Journal of Robust and Nonlinear Control, 16(12):575-597, May 2006. 
J. C. Moreno, A. Baños, and M. Berenguel. A QFT framework for anti-windup control systems 
design. Journal of Dynamic Systems, Measurement and Control, 132(021012), 15 pages, 
2010. 
Y. Piguet. Sysquake 3 User Manual. Calerga Sari, Lausanne, Switzerland, 2004. 
C. J. Pritchard and B. Wigdorowitz. Mapping frequency response bounds to the time domain. 

International Journal of Control, 64 (2):335-343, 1996. 
C. J. Pritchard and B. Wigdorowitz. Improved method of determining time-domain transient 

performance bounds from frequency response uncertainty regions. International 

Journal of Control, 66 (2):311-327, 1997. 
L. Roca, M. Berenguel, L.J. Yebra, and D. Alarcon. Solar field control for desalination plants. 

Solar Energy, 82:772-786, 2008. 
A. Sanchez-Gimeno, Sanchez-Hermosilla J., Rodriguez E, M. Berenguel, and J.L. Guzman. 

Self-propelled vehicle for agricultural tasks in greenhouses. In World Congress - 

Agricultural Engineering for a better world, Bonn, Germany, 2006. 
J. Sanchez-Hermosilla, Rodriguez E, Gonzalez R., J.L. Guzman, and M. Berenguel. A 

mechatronic description of an autonomous mobile robot for agricultural tasks in 

greenhouses. In Alejandra Barrera, editor, Mobile Robots Navigation, pages 583-608. 

In-Tech, 2010. ISBN 978-953-307-076-6. 
O. Yaniv, Quantitative Feedback Design of Linear and Nonlinear Control Systems. Kluwer 

Academic Publishers, 1999. 



18 



Consensuability Conditions of Multi Agent 

Systems with Varying Interconnection Topology 

and Different Kinds of Node Dynamics 

Dr. Sabato Manfredi 

Faculty of Engineering, University of Naples Federico II, Via Claudio 21, Napoli 80120. 

Italy 



1. Introduction 

Many systems in nature and of practical interest can be modeled as large collections 
of interacting subsystems. Such systems are referred to as "Multi Agent Systems" (briefly, 
MASs), and some examples include electrical power distribution networks (P. Kundur, 
1994), communication (F. Paganini, 2001), and collections of vehicles traveling in formation 
(J.K. Hedrick et al., 1990). Several practical issues concern the design of decentralized 
controllers and the stability analysis of MASs in the presence of uncertainties in the subsystem 
interconnection topology (i.e. due in practical applications to failures of transmission lines). 
The analysis and control of collections of interconnected systems have been widely studied 
in the literature. Early work on stability analysis and decentralized control of large-scale 
interconnected systems is found in (D. Limebeer & Y.S. Hung, 1983; A. Michel & R. Miller, 
1977; P.J. Moylan & D.J. Hill, 1978; Siljak, 1978; J.C. Willems, 1976). Some of the more widely 
notable stability criteria are based on the passivity conditions (M. Vidyasagar, 1977) and on 
the well-known notion of connective stability introduced in (Siljak, 1978). 
More recently, MASs have appeared broadly in several applications including formation 
flight, sensor networks, swarms, and the collective behavior of flocks (Savkin, 2004; C.C. Cheaha et al., 
2009; W. Ren, 2009), motivating the recent significant attention of the scientific community to 
distributed control and consensus problems (e.g. (R.O. Saber & R. Murray, 2004; Z. Lin et al., 
2004; V. Blondel et al., 2005; J. N. Tsitsiklis et al., 1986)). One common feature of consensus 
algorithms is to allow every agent to automatically converge to a common consensus state using 
only local information received from its neighboring agents. "Consensusability" of MASs is a 
fundamental problem concerning the conditions for the existence of the consensus state, and 
it is of great importance in both theoretical and practical aspects of cooperative protocols 
(e.g. flocking, the rendezvous problem, robot coordination). Results about the consensusability of 
MASs are related to first and second order systems and are based on the assumption of 
jointly-connected interaction graphs (e.g. in (R.O. Saber & R. Murray, 2004; J. N. Tsitsiklis 
et al., 1986)). An extension to more general linear MASs, whose agents are described by 
LTI (Linear Time Invariant) systems, can be found in (Tuna, 2008), where the closed-loop 
MASs were shown to be asymptotically consensus stable if the topology has a spanning 
tree. In (L. Scardovi & R. Sepulchre, 2009), the synchronization of a 
network of identical linear state-space models under a possibly time-varying and directed 
interconnection structure is investigated. Many investigations are carried out when the dynamic structure is 
fixed and the communication topology is time varying (e.g. in (R.O. Saber & R. Murray, 2004; 
W. Ren & R. W. Beard, 2005; Ya Zhanga & Yu-Ping Tian, 2009)). One of the main appealing fields 
of research is the investigation of MAS consensusability under both dynamic agent 
structure and communication topology variations. In particular, it is worth analyzing the joint 
impact of the agent dynamics and the communication topology on MAS consensusability. 
The aim of this chapter is to give consensusability conditions for LTI MASs as a function of the 
agent dynamic structure, the communication topology and the coupling strength parameters. The 
theoretical results are derived by transferring the consensusability problem into the robust 
stability analysis of LTI MASs. Differently from existing works, here the consensusability 
conditions are given in terms of the adjacency matrix rather than the Laplacian matrix. Moreover, 
it is shown that the interplay among consensusability, node dynamics and topology must be 
taken into account for MAS stabilization: specifically, consensusability of MASs is assessed 
for all topologies, dynamics and coupling strengths satisfying a pre-specified bound. From 
the practical point of view, the consensusability conditions can be used for both the analysis 
and the planning of MAS protocols to guarantee robust stability for a wide range of possible 
interconnection topologies, coupling strengths and node dynamics. Also, the number of 
subsystems affecting the overall system stability is taken into account, since the robustness of 
multi agent systems when the number of subsystems changes is analyzed. Finally, simulation 
examples are given to illustrate the theoretical analysis. 

2. Problem statement 

We consider a network composed of linear systems interconnected by a specific topological
structure. The dynamical system at each node is of m-th order and described by the matrices
(A, B, C). Let G(V, E, U) be a directed weighted graph (digraph) with the set of nodes V = {1, ..., n},
set of edges E ⊆ n × n, and the associated weighted adjacency matrix U = {u_ij}, with u_ij > 0
if there is a directed edge of weight u_ij from vertex j (parent node) into vertex i (child node).
The linear systems are interconnected by a directed weighted graph G(V, E, U). Each node
dynamics is described by:

    \dot{x}_i(t) = A x_i(t) + B v_i(t)
    y_i(t) = C x_i(t)                                            (1)

where v_i(t) is the input to the i-th node, of the form

    v_i(t) = \sum_j u_{ij} y_j(t).                               (2)

In this way, each node dynamic is influenced by the sum of its neighbors' outputs. This yields
the MAS network equation:

    \dot{x}_i(t) = A x_i(t) + \sum_{j=1}^{n} u_{ij} BC x_j(t)    (3)

with 1 ≤ i ≤ n, and its compact form:

    \dot{x}(t) = A_g x(t)                                        (4)




with A_g = (I_n \otimes A) + (U \otimes BC), where \otimes denotes the matrix Kronecker product. Notice that
the above equation can be associated with the main model used in the literature for describing
synchronization phenomena, energy distribution and tank networks (e.g. in (R. Cogill & S.
Lall, 2004)). Moreover, the system at each node can be of MIMO or SISO type, and the matrix
product BC takes into account the coupling strength and the coupling interaction among the
state system variables. Observing the MAS model (3), we point out that the overall network
dynamics is affected by the node system dynamic matrix A, the coupling matrix BC, and by
the adjacency matrix U of the topological structure.
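As a minimal numerical illustration of this structure, the following Python sketch (assuming numpy; the node matrices and the 4-node digraph below are purely illustrative, not taken from the chapter's tables) builds A_g as in (4) and reads off stability from its spectrum.

    import numpy as np

    # Illustrative second-order node (m = 2): dynamic, input and output matrices
    A = np.array([[-6.0, 3.0], [3.0, -12.0]])
    B = np.array([[1.0], [0.0]])
    C = np.array([[1.0, 0.0]])
    BC = B @ C                                  # coupling matrix

    # Illustrative weighted adjacency matrix U of a 4-node digraph
    U = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    n = U.shape[0]

    # MAS network matrix of equation (4): A_g = I_n (x) A + U (x) BC
    A_g = np.kron(np.eye(n), A) + np.kron(U, BC)

    # The MAS (3) is stable iff all eigenvalues of A_g have negative real part
    print("max Re(eig(A_g)) =", np.linalg.eigvals(A_g).real.max())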

Consider a network with n agents whose information exchange topology is described by
a graph G(V, E, U) and let x_i be the state of the i-th agent-node; consensus corresponds to the
network condition such that the state of the agents as a whole asymptotically converges to an
equilibrium state with identical elements (i.e. x_i = x_j for all (i, j) ∈ n × n). The common value
x is named the consensus value. Consensusability of MASs is a fundamental problem concerning
the conditions for assessing a network consensus equilibrium. Under the assumption
of the existence of a network equilibrium, consensusability deals with the search for
analytical conditions such that the network equilibrium corresponds to a consensus state.
In this way, without loss of generality, the consensusability problem can be reduced to the
problem of assessing stabilization conditions of the MAS network (3) with respect to the
equilibrium point (i.e. x_i = x_j = 0 for all (i, j) ∈ n × n).
Hence, we are interested in solving the following problem:

Problem. Given a multi agent network described by (3), determine the MAS
consensusability conditions as a function of node dynamics, topology and coupling strength.
Specifically, consensusability of MASs is assessed for all topologies, dynamics and coupling
strengths satisfying a pre-specified bound.

In what follows we present analytical conditions for solving the above Problem.

3. Conditions for MASs consensuability

Before presenting the MAS consensuability conditions for (3), we have to recast the
eigenvalue set σ(A_g) of the MAS network dynamic matrix A_g.

Lemma 1. Let σ(U) = {μ_i} be the eigenvalue set of the adjacency matrix U and σ(A_g) the eigenvalue
set of the MAS dynamical matrix A_g; then σ(A_g) = \bigcup_{i=1}^{n} σ(A + μ_i BC).

Proof. Let J be the Jordan canonical form of U; then there exists a similarity matrix S such that
J = S^{-1} U S. Hence S \otimes I_m is a similarity matrix for the matrices I_n \otimes A + U \otimes BC and
I_n \otimes A + J \otimes BC. From the properties of the Kronecker product (Horn R.A. & Johnson C.R., 1995) it results:

    (S \otimes I_m)^{-1} (I_n \otimes A + U \otimes BC) (S \otimes I_m) =
    (I_n \otimes A) + (S^{-1} U S \otimes BC) = (I_n \otimes A) + (J \otimes BC)

with J an upper triangular matrix, so that I_n \otimes A + J \otimes BC is an upper block triangular
matrix with the blocks A + μ_i BC on its diagonal. Hence the eigenvalues of the matrix
I_n \otimes A + U \otimes BC are the union of the eigenvalues of the block matrices on the diagonal.
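Lemma 1 is easy to verify numerically. As a standalone illustration (assuming numpy; the small matrices below are arbitrary, not from the chapter), the two printed eigenvalue lists coincide up to ordering and rounding.

    import numpy as np

    A  = np.array([[-5.0, 1.0], [0.0, -3.0]])
    BC = np.array([[1.0, 0.0], [0.5, 0.2]])
    U  = np.array([[0.0, 1.0, 0.0],
                   [0.5, 0.0, 1.0],
                   [1.0, 0.0, 0.0]])
    n = U.shape[0]

    A_g = np.kron(np.eye(n), A) + np.kron(U, BC)          # network matrix of (4)
    mu = np.linalg.eigvals(U)                             # eigenvalues of U
    union = np.concatenate([np.linalg.eigvals(A + m * BC) for m in mu])

    # sigma(A_g) equals, as a multiset, the union of sigma(A + mu_i * BC)
    print(np.sort_complex(np.linalg.eigvals(A_g)))
    print(np.sort_complex(union))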




From the above Lemma 1, the eigenvalues of the MAS dynamic matrix A_g are explicitly a
function of those of the matrices A + μ_i BC, for all i. So we can decouple the effects of the topology
structure (through μ_i), the coupling strength BC and the node dynamics A on the overall stability of the
MAS. This can be used to give MAS stability conditions as a function of topology structure,
node dynamics and coupling strength, as shown by the following Theorem 1:

Theorem 1. Let the MAS be composed of n identical MIMO systems of m-th order,
interconnected by the digraph G = (V, E, U) with adjacency matrix U, with eigenvalues
μ_1 ≤ μ_2 ≤ ... ≤ μ_n. If the node dynamic matrix A = {a_ij} and the coupling matrix BC = {c_ij}
fulfill the conditions:

    a_{ii} + μ_k c_{ii} + \sum_{j \neq i} |a_{ij} + μ_k c_{ij}| < 0          (5)

for all i = 1, 2, ..., m and for all k = 1, 2, ..., n, then the MAS (3) is stable.
Proof. If conditions (5) hold, then all the eigenvalues of the matrix

    A + μ_k BC =
    [ a_{11} + μ_k c_{11}   a_{12} + μ_k c_{12}   ...   a_{1m} + μ_k c_{1m} ]
    [ a_{21} + μ_k c_{21}   a_{22} + μ_k c_{22}   ...   a_{2m} + μ_k c_{2m} ]
    [        ...                    ...           ...           ...         ]
    [ a_{m1} + μ_k c_{m1}   a_{m2} + μ_k c_{m2}   ...   a_{mm} + μ_k c_{mm} ]

for all k = 1, 2, ..., n, are located in a convex set in the left complex half plane, as results from the
application of Gershgorin's circle theorem (Horn R.A. & Johnson C.R., 1995). Hence, by
Lemma 1, the MAS is stable.
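The sufficient test of Theorem 1 amounts to a Gershgorin-disc check. The sketch below (assuming numpy; matrices are illustrative) tests, for every adjacency eigenvalue, that each disc centre plus its off-diagonal row sum is negative; for complex adjacency eigenvalues the real part of the centre is used.

    import numpy as np

    def gershgorin_consensus_test(A, BC, U):
        """Sufficient test of condition (5): every Gershgorin disc of A + mu_k*BC
        must lie in the open left half-plane, for every eigenvalue mu_k of U."""
        for mu_k in np.linalg.eigvals(U):
            M = A + mu_k * BC
            for i in range(M.shape[0]):
                radius = np.sum(np.abs(M[i])) - np.abs(M[i, i])
                if M[i, i].real + radius >= 0:
                    return False
        return True

    # Illustrative example (values not from the chapter)
    A  = np.diag([-4.1, -4.1])
    BC = 0.5 * np.eye(2)
    U  = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(gershgorin_consensus_test(A, BC, U))    # True: condition (5) holds here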

Theorem 1 easily yields the following corollaries.

Corollary 1. Let the MAS be composed of n identical MIMO systems of order 2,
interconnected by the digraph G = (V, E, U), with adjacency matrix U with eigenvalues
μ_1 ≤ μ_2 ≤ ... ≤ μ_n. If the node dynamic matrix A = {a_ij} and the coupling matrix BC = {c_ij},
with c_ij > 0, i, j = 1, 2, fulfill the conditions:

    a_{ij} ≤ -c_{ij} μ_n
    a_{ii} < -a_{ij} - (c_{ii} + c_{ij}) μ_n                 (6)

    a_{ij} ≤ c_{ii} μ_n
    a_{ij} < -c_{ij} μ_1
    a_{ii} < a_{ij} + (c_{ij} - c_{ii}) μ_1                  (7)

for i, j = 1, 2, then the MAS (3) is stable.




Because the adjacency matrix U of a graph has both positive and negative eigenvalues,
conditions (6) and (7) implicitly imply the assumption that the single system at the node is
stable. In this way, as expected, we conclude that it is not possible to stabilize a network of
unstable systems by acting only on the topological structure. Given a specified node dynamics,
coupling strength and bound on the adjacency matrix U, by conditions (6) and (7) we
can assess MAS stability. Moreover, the MAS robustness with respect to varying or switching
topology can be dealt with by considering the span of the eigenvalues of the admissible
topology structures. As we will show in what follows, it is easy to evaluate the eigenvalues of
A_g, given the eigenvalues of U, in some simple and representative cases of interest.

Corollary 2. Let the MAS be composed of n identical systems of order 1,
interconnected by the digraph G = (V, E, U), with adjacency matrix U with eigenvalues
μ_1 ≤ μ_2 ≤ ... ≤ μ_n. If the node dynamic matrix A = a and the coupling matrix BC = c fulfill
the conditions:

    a < -c μ_n    if c > 0                                (8)
    a < -c μ_1    if c < 0,                               (9)

then the MAS (3) is stable.

Corollary 2 reduces the analytical result of Theorem 1 to the case of the consensus of
integrators (R.O. Saber & R. Murray, 2004) with coupling gain c. The smaller c is, the higher the
degree of robustness of the network to slow node dynamics; conversely, a higher c reduces the
stability margin of the MAS. Finally, for a fixed dynamics at the node, the maximum admissible
coupling strength c depends on the maximum and minimum eigenvalues of the adjacency
matrix:

    c < -a / μ_n    if c > 0                              (10)
    c > -a / μ_1    if c < 0.                             (11)

Corollary 3. Let the MAS be composed of n identical MIMO systems of m-th order, interconnected by the
digraph G = (V, E, U), with adjacency matrix U with eigenvalues μ_1 ≤ μ_2 ≤ ... ≤ μ_n. If the
node dynamic matrix A = {a_ij} and the coupling matrix BC = {c_ij} are both upper or both lower
triangular matrices and fulfill the conditions:

    a_{ii} < -c_{ii} μ_n    if c_{ii} > 0                 (12)
    a_{ii} < -c_{ii} μ_1    if c_{ii} < 0,                (13)

then the MAS (3) is stable.




Fig. 1. Procedure of rewiring of links in a regular network (a) with increasing probability p. As p
increases, the network moves from regular (a) to random (c), becoming small-world (b) for a critical value
of p; n = 20, k = 4.

Notice that if BC = c I_m, (10) and (11) become:

    c < min_i |a_{ii}| / μ_n    if c > 0                  (14)
    c > min_i |a_{ii}| / μ_1    if c < 0                  (15)

and hence the stability of the MAS is explicitly given as a function of the slowest node dynamics
in the network.

Now we would like to point out the case of an undirected topology with symmetric adjacency
matrix U. If we assume A and BC to be symmetric, then A_g is symmetric with real eigenvalues.
Moreover, from the field of values property (Horn R.A. & Johnson C.R., 1995), letting σ(A) = {α_j} and
σ(BC) = {ν_j} be the eigenvalue sets of A and BC, the eigenvalues of A + μ_i BC lie in the
interval [min_j{α_j} + μ_i min_j{ν_j}, max_j{α_j} + μ_i max_j{ν_j}], for every 1 ≤ i ≤ n, 1 ≤ j ≤ m.
In this way, there is a bound that needs to be satisfied by the topology structure, node dynamics and
coupling matrix for MAS stabilization.
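This interval estimate is of Weyl type and can be checked numerically. A quick sketch follows (assuming numpy; the symmetric matrices and the nonnegative adjacency eigenvalue are illustrative):

    import numpy as np

    A  = np.array([[-5.0, 1.0], [1.0, -4.0]])
    BC = np.array([[0.8, 0.2], [0.2, 0.5]])
    mu_i = 1.7                                   # one (nonnegative) adjacency eigenvalue

    alpha = np.linalg.eigvalsh(A)                # eigenvalues of A (ascending)
    nu    = np.linalg.eigvalsh(BC)               # eigenvalues of BC
    lam   = np.linalg.eigvalsh(A + mu_i * BC)    # eigenvalues of A + mu_i*BC

    lower = alpha.min() + mu_i * nu.min()
    upper = alpha.max() + mu_i * nu.max()
    print(lower <= lam.min() and lam.max() <= upper)   # True: interval bound holds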




In the literature, MAS consensuability results have been given in terms of Laplacian
matrix properties. Here, differently, we have given bounds as a function of the adjacency
matrix. Anyway, we can use the results on the Laplacian eigenvalues to recast
the bounds given on the adjacency matrix. To this aim, defining the degree d_i of the i-th node of
an undirected graph as \sum_j u_{ij}, the Laplacian matrix is defined as L = D - U, where D is the
diagonal matrix with the degree of the i-th node in position (i, i). Clearly L is a zero-row-sum
matrix with non-positive off-diagonal elements. It has at least one zero eigenvalue and all
nonzero eigenvalues have nonnegative real parts. So U = D - L and, since the minimum and
maximum Laplacian eigenvalues are bounded respectively by 0 and by min(max{d_k + d_j : (k, j) ∈ E(G)}, n),
we have:

Lemma 2. Let U be the adjacency matrix of an undirected and connected graph G = (V, E, U), with
eigenvalues μ_1 ≤ μ_2 ≤ ... ≤ μ_n; then:

    μ_1(U) ≥ min_i d_i - min( max_{(k,j) ∈ E(G)} {d_k + d_j}, n )       (16)

    μ_n(U) ≤ max_i d_i                                                  (17)

Proof. Easily follows from the Laplacian eigenvalue bounds and the field of values property
(Horn R.A. & Johnson C.R., 1995).
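The degree-based bounds (16)-(17) need only the graph itself. A small check on an illustrative undirected 5-node graph (assuming numpy):

    import numpy as np

    U = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)
    n = U.shape[0]
    d = U.sum(axis=1)                           # node degrees

    mu = np.sort(np.linalg.eigvalsh(U))         # adjacency eigenvalues (symmetric U)

    edges = [(k, j) for k in range(n) for j in range(n) if k != j and U[k, j] > 0]
    upper_L = min(max(d[k] + d[j] for k, j in edges), n)   # bound on the largest Laplacian eigenvalue

    print(mu[0]  >= d.min() - upper_L)          # bound (16)
    print(mu[-1] <= d.max())                    # bound (17)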

4. Simulation validation 

In the follows we will present a variety of simulations to validate the above theoretical results 
under different kinds of node dynamic and network topology variations. Specifically the MAS 
topology variations have been carried out by using the well known Watts-Strogats procedure 
described in (Watts & S. H. Strogatz, 1998). In particular, starting from the regular network 
topology (p = 0), by increasing the probability p of rewiring the links, it is possible smoothly 
to change its topology into a random one (p = 1), with small world typically occurring at 
some intermediate value. In so doing neither the number of nodes nor the overall number of 
edges is changed. In Fig. 1 it shown the results in the case of MAS of 20 nodes with each one 
having k = 4 neighbors. 

Among the simulation results we focus our attention on the maximum and minimum 
eigenvalues of the matrixes U (i.e. ]i n and ]i{) and A g (i.e. Am an d A m ) and their bounds 
computed by using the results of the previous section. In particular, by Lemma 2, we convey 
the bounds on U eigenvalues in bounds on Ag eigenvalues suitable for the case of time varying 
topology structure. We assume in the simulations the matrices A and BC to be symmetric. In 
this way, if U eigenvalues are in \v\, v^\, let cr(A) = {a z }, a(BC) = {v z }, the eigenvalues of Ag 
will be in the interval [min oci + min^yy, vi v ; -}, max oi{ + maxjz^vy, v^ v ; -}] for i,j = 1, 2, . . . , n. 

i j J J i J 

Notice that, known the interval of variation \v\, v^\ of the eigenvalues set of U under switching 
topologies, we can recast the conditions (8), (9), (12), (13), (6), (7) and to use it for design 
purpose. Specifically, given the interval \o\, V2] associated to the topology possible variations, 
we derive conditions on A or BC for MAS consensuability. 

We consider a graph of n = 400 and k = 4. In the evolving network simulations, we started 
with k = 4 and bounded it to the order of O(log(n)) for setting a sparse graph. In Tab 1 are 
drawn the node dynamic and coupling matrices considered in the first set of simulations. 
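A sketch of the simulation loop just described is given below. It uses networkx's Watts-Strogatz generator, which is an assumption (the chapter does not state which software was used), a smaller network than in the chapter for brevity, and scalar node data in the style of Case 1 of Table 1.

    import numpy as np
    import networkx as nx

    # Sweep the rewiring probability p and track the extreme eigenvalues of U and A_g.
    a, c = -4.1, 1.0                      # scalar node dynamic and coupling (A = a, BC = c)
    n, k = 50, 4                          # smaller network than in the chapter, for speed

    for p in [0.0, 0.01, 0.1, 1.0]:
        G = nx.watts_strogatz_graph(n, k, p, seed=1)
        U = nx.to_numpy_array(G)
        mu = np.sort(np.linalg.eigvalsh(U))
        A_g = a * np.eye(n) + c * U       # A_g for scalar nodes
        lam = np.sort(np.linalg.eigvalsh(A_g))
        print(f"p={p:5.2f}  mu_1={mu[0]:6.2f}  mu_n={mu[-1]:6.2f}  "
              f"lam_m={lam[0]:6.2f}  lam_M={lam[-1]:6.2f}")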



Fig. 2. Case 1. Dashed line: bound on the eigenvalue; continuous line: eigenvalue. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.



Fig. 3. Case 1: state evolution over time.








Fig. 4. Case 2. Dashed line: bound on the eigenvalue; continuous line: eigenvalue. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.





A 


B 


C 


Case 1: 


-4.1 


1 


1 


Case 2: 


-12 


1 


1 


Case 3: 


-6 


1 


1 


Case 4: 


-6 


2 


1 



Table 1. Node system matrices (A,B,C) 








Fig. 5. Case 3. Dashed line: bound on the eigenvalue; continuous line: eigenvalue. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.




Fig. 6. Case 3: state evolution over time.












Fig. 7. Case 4. Dashed line: bound on the eigenvalues; continuous line: eigenvalues. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.

In case 1 (Fig. 2), we note that although we start from a stable MAS network, the topology
variation leads the network to an instability condition (namely, λ_M becomes positive). Fig. 3
shows the time evolution of the states of the first 10 nodes, under a switching frequency of 1 Hz.
We note that the MAS converges towards the consensus state while it is stable, and then goes
into an instability condition.

In case 2, we consider node dynamics faster than the maximum network degree d_M of
all the evolving network topologies from the complete to the random graph. Notice that although
this assures MAS consensuability, as shown in Fig. 4, it can be rather conservative.

In case 3 (Fig. 5), we consider slower node dynamics than in case 2. The MAS is robustly
stable under topology variations. In Fig. 6 the state evolution is convergent and the
settling time is about 4.6 / |λ_M(A_g)|.

Then we have varied the value of BC by doubling the B matrix value while leaving the
node dynamic matrix unchanged. As appears in Fig. 7, the MAS goes into an instability condition, pointing out










»•'?» 


1 . ■ 
p* f -» -;»ni 













I 


10 






1 

V 

ll'l 


8 






J«#l| , ::; 






J 5 


• .v 






-ir 


"% 


b 


:: S ; 






4 


:l:::::: 


. »» » ( ■»- 











10"' 



10" 



10 u 



10"' 



10"' 



10" 



10" 



(a) 



(b) 



-10 

-15 

-20 

-25 

-30 
10 









V 
I 



10 10" 

p 

(c) 



10 u 



-10 



-15 



10" v 



r 'i\ >\ i 



10 10" 

p 

(d) 



1(T 



Fig. 8. Case 5. Dashed line: bound on the eigenvalue; continuous line: eigenvalue: (a) Maximum 
eigenvalue of A g/ (b) Maximum eigenvalue of U, (c) Minimum eigenvalue of A g/ (d) Minimum 
eigenvalue of U 

that also the coupling strength can affect the stability (as stated by the conditions (8), (9)) and 
that this effect can be amplified by the network topological variations. 





             A                 B       C
    Case 5:  [-6  3; 3  -12]   1       [1 0]
    Case 6:  [-3  3; 3   -6]   1       [1 0]
    Case 7:  [-3  3; 3   -6]   0.25    [1 0]

Table 2. Node system matrices (A, B, C).






Fig. 9. Case 6. Dashed line: bound on the eigenvalue; continuous line: eigenvalue. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.

On the other hand, a reduction of BC increases the MAS stability margin. So we can tune
the value of BC in order to guarantee stability, or a desired robust MAS stability margin, under
specified node dynamics and topology network variations. Indeed, if BC has eigenvalues above
1, its effect is to amplify the eigenvalues of U and we need faster node dynamics to assure
MAS stability. If BC has eigenvalues less than 1, its effect is an attenuation and the node dynamics
can be slower without affecting the network stability.

Now we consider a SISO system of second order at each node, as shown in Table 2. In this case the
matrix BC has one zero eigenvalue, its rows being linearly dependent.

In case 5 the eigenvalues of A are α_1 = -4.76 and α_2 = -13.23, and the eigenvalues of the
coupling matrix BC are ν_1 = 1 and ν_2 = 0. In this case the node dynamics is sufficiently fast to
guarantee MAS consensuability (Fig. 8). In case 6, we slow the node dynamic matrix A down to
α_1 = -1.15 and α_2 = -7.85. Fig. 9 shows an instability condition for the MAS network. We








Fig. 10. Case 7. Dashed line: bound on the eigenvalue; continuous line: eigenvalue. (a) Maximum
eigenvalue of A_g, (b) maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum
eigenvalue of U.

can bring the MAS back to a stability condition by designing the coupling matrix BC, as appears in
case 7 and the associated Fig. 10.

4.1 Robustness to node fault

Now we deal with the case of node faults. We can state the following Theorem.

Theorem 2. Let A and BC be symmetric matrices and G(V, E, U) an undirected graph. If the MAS
system described by A_g is stable, it is stable also in the presence of node faults. Moreover, the
MAS dynamics becomes faster after the node fault.

Proof. The graph being undirected and A and BC symmetric, A_g is symmetric. Let Ã_g be the
MAS dynamic matrix associated with the network after a node fault. Ã_g is obtained from A_g by
eliminating the rows and columns corresponding to the nodes that went down. So Ã_g is a principal
submatrix of A_g and, by the interlacing theorem (Horn R.A. & Johnson C.R., 1995), it has eigenvalues inside








Fig. 11. Eigenvalues in the case l = 1. Dashed line: eigenvalue in the case of complete topology with
n = 100; continuous line: eigenvalue in the case of node fault. (a) Maximum eigenvalue of A_g, (b)
maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum eigenvalue of U.

the real interval whose extremes are the minimum and maximum eigenvalues of A_g. Hence, if A_g is
stable, Ã_g is stable too. Moreover, the maximum eigenvalue of Ã_g is less than that of A_g, so
the slowest dynamics of the system \dot{x}(t) = Ã_g x(t) is faster than that of the system \dot{x}(t) = A_g x(t).

In what follows we show the eigenvalues of the MAS dynamics in the presence of node faults.
We consider a MAS network with n = 100. For each evolving network topology, at each simulation
time step, we compare the maximum and minimum eigenvalues of A_g with those resulting from the
fault of l randomly chosen nodes. Figures 11 and 12 show the eigenvalues of the system dynamics
for the cases l = 1 and l = 50.

Notice that the eigenvalues of U and A_g of the faulted network are inside the real interval
containing the eigenvalues of U and A_g of the complete graph. Fig. 13 shows the time
evolutions of the states of the complete and faulted graphs. Notice that the faulted network is faster
than the initial network, as stated by the analysis of the spectra of A_g and Ã_g.
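The interlacing argument behind Theorem 2 is easy to reproduce numerically: deleting the rows and columns of the faulted nodes from a symmetric A_g keeps the remaining spectrum inside the original extreme eigenvalues. A small check (assuming numpy; graph and node data are illustrative) follows.

    import numpy as np

    rng = np.random.default_rng(0)

    # Symmetric illustrative MAS: scalar stable nodes on a random undirected graph
    n, a, c = 30, -5.0, 1.0
    M = (rng.random((n, n)) < 0.15).astype(float)
    U = np.triu(M, 1)
    U = U + U.T                                         # symmetric 0/1 adjacency
    A_g = a * np.eye(n) + c * U

    lam = np.linalg.eigvalsh(A_g)

    # Fault of l randomly chosen nodes: delete the corresponding rows/columns
    l = 10
    keep = np.sort(rng.choice(n, size=n - l, replace=False))
    A_g_fault = A_g[np.ix_(keep, keep)]
    lam_f = np.linalg.eigvalsh(A_g_fault)

    # Interlacing: the faulted spectrum stays inside [lam.min(), lam.max()],
    # and the slowest mode does not get slower
    print(lam.min() <= lam_f.min() and lam_f.max() <= lam.max())   # True
    print(lam_f.max() <= lam.max())                                # True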








Fig. 12. Eigenvalues in the case l = 50. Dashed line: eigenvalue in the case of complete topology with
n = 100; continuous line: eigenvalue in the case of node fault. (a) Maximum eigenvalue of A_g, (b)
maximum eigenvalue of U, (c) minimum eigenvalue of A_g, (d) minimum eigenvalue of U.

5. Conclusions

In this book chapter we have investigated the consensuability of MASs under variations of both
the agent dynamic structure and the communication topology. Specifically, we have given
consensusability conditions for linear MASs as a function of the agent dynamic structure,
communication topology and coupling strength parameters. The theoretical results are obtained
by transferring the consensusability problem to the stability analysis of LTI MASs. Moreover,
it is shown that the interplay among consensusability, node dynamics and topology must
be taken into account for MAS stabilization: consensusability of MASs is assessed for
all topologies, dynamics and coupling strengths satisfying a pre-specified bound. From the
practical point of view, the consensusability conditions can be used for both the analysis
and planning of MAS protocols to guarantee robust stability for a wide range of possible
interconnection topologies, coupling strengths and node dynamics. Also, the consensuability





Fig. 13. Time evolution of the state variables for l = 50. Top: complete graph. Bottom: graph
with fault.

of MAS in the presence of node faults has been analyzed. Simulation scenarios are given to 
validate the theoretical results. 

Acknowledgement 

The author would like to thank Ms. F. Schioppa for valuable discussion. 



6. References 

J.K. Hedrick, D.H. McMahon, V.K. Narendran, and D. Swaroop (1990). Longitudinal vehicle
    controller design for IVHS systems. Proceedings of the American Control Conference,
    pages 3107-3112.
P. Kundur (1994). Power System Stability and Control. McGraw-Hill.
D. Limebeer and Y.S. Hung (1983). Robust stability of interconnected systems. IEEE Trans.
    Automatic Control, pages 710-716.
A. Michel and R. Miller (1977). Qualitative Analysis of Large Scale Dynamical Systems. Academic
    Press.
P.J. Moylan and D.J. Hill (1978). Stability criteria for large scale systems. IEEE Trans. Automatic
    Control, pages 143-149.
F. Paganini, J. Doyle, and S. Low (2001). Scalable laws for stable network congestion control.
    Proceedings of the IEEE Conference on Decision and Control, pages 185-190.
M. Vidyasagar (1977). L2 stability of interconnected systems using a reformulation of the passivity
    theorem. IEEE Transactions on Circuits and Systems, 24, 637-645.
D. Siljak (1978). Large-Scale Dynamic Systems. Elsevier North-Holland.
J.C. Willems (1976). Stability of large-scale interconnected systems.
Saber R.O., Murray R.M. (2004). Consensus problems in networks of agents with switching
    topology and time-delays. IEEE Transactions on Automatic Control, Vol. 49, 9.
Z. Lin, M. Broucke, and B. Francis (2004). Local control strategies for groups of mobile autonomous
    agents. IEEE Transactions on Automatic Control, Vol. 49, 4, pages 622-629.
V. Blondel, J. M. Hendrickx, A. Olshevsky, and J. N. Tsitsiklis (2005). Convergence in multiagent
    coordination, consensus, and flocking. 44th IEEE Conference on Decision and Control
    and European Control Conference, pages 2996-3000.
A. V. Savkin (2004). Coordinated collective motion of groups of autonomous mobile robots: analysis of
    Vicsek's model. IEEE Transactions on Automatic Control, Vol. 49, 6, pages 981-982.
J. N. Tsitsiklis, D. P. Bertsekas, M. Athans (1986). Distributed asynchronous deterministic
    and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control,
    pages 803-812.
C. C. Cheaha, S. P. Houa, and J. J. E. Slotine (2009). Region-based shape control for a swarm of
    robots. Automatica, Vol. 45, 10, pages 2406-2411.
Ren, W. (2009). Collective motion from consensus with Cartesian coordinate coupling. IEEE
    Transactions on Automatic Control, Vol. 54, 6, pages 1330-1335.
S. E. Tuna (2008). LQR-based coupling gain for synchronization of linear systems. Available at:
    http://arxiv.org/abs/0801.3390
Luca Scardovi, Rodolphe Sepulchre (2009). Synchronization in networks of identical linear systems.
    Automatica, Volume 45, Issue 11, pages 2557-2562.
W. Ren and R. W. Beard (2005). Consensus seeking in multiagent systems under dynamically
    changing interaction topologies. IEEE Trans. Automatic Control, vol. 50, no. 5, pp.
    655-661.
Ya Zhanga and Yu-Ping Tian (2009). Consentability and protocol design of multi-agent systems
    with stochastic switching topology. Automatica, Vol. 45, 5, pages 1195-1201.
R. Cogill, S. Lall (2004). Topology independent controller design for networked systems. IEEE
    Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, December
    2004.
D. J. Watts, S. H. Strogatz (1998). Collective dynamics of small-world networks. Nature, Macmillan
    Publishers Ltd, Vol. 393, June 1998.
Horn R.A. and Johnson C.R. (1995). Topics in Matrix Analysis. Cambridge University Press,
    1995.



19 



On Stabilizability and Detectability of 
Variational Control Systems 

Bogdan Sasu* and Adina Luminița Sasu

Department of Mathematics, Faculty of Mathematics and Computer Science,
West University of Timișoara, V. Pârvan Blvd. No. 4, 300223 Timișoara

Romania 



1. Introduction 

The aim of this chapter is to present several interesting connections between the input-output
stability properties and the stabilizability and detectability of variational control systems,
proposing a new perspective concerning the interference of interpolation methods in
control theory and extending the applicability area of input-output methods in stability
theory.

Indeed, let X be a Banach space, let (Θ, d) be a locally compact metric space and let E = X × Θ.
We denote by B(X) the Banach algebra of all bounded linear operators on X. If Y, U are two
Banach spaces, we denote by B(U, Y) the space of all bounded linear operators from U into Y
and by C_s(Θ, B(U, Y)) the space of all continuous bounded mappings H : Θ → B(U, Y). With
respect to the norm |||H||| := sup_{θ∈Θ} ||H(θ)||, C_s(Θ, B(U, Y)) is a Banach space.

If H ∈ C_s(Θ, B(U, Y)) and Q ∈ C_s(Θ, B(Y, Z)) we denote by QH the mapping Θ ∋ θ ↦
Q(θ)H(θ). It is obvious that QH ∈ C_s(Θ, B(U, Z)).

Definition 1.1. Let J ∈ {R_+, R}. A continuous mapping σ : Θ × J → Θ is called a flow on Θ
if σ(θ, 0) = θ and σ(θ, s + t) = σ(σ(θ, s), t), for all (θ, s, t) ∈ Θ × J^2.

Definition 1.2. A pair π = (Φ, σ) is called a linear skew-product flow on E = X × Θ if σ is a
flow on Θ and Φ : Θ × R_+ → B(X) satisfies the following conditions:

(i) Φ(θ, 0) = I_d, the identity operator on X, for all θ ∈ Θ;

(ii) Φ(θ, t + s) = Φ(σ(θ, t), s)Φ(θ, t), for all (θ, t, s) ∈ Θ × R_+^2 (the cocycle identity);

(iii) (θ, t) ↦ Φ(θ, t)x is continuous, for every x ∈ X;

(iv) there are M ≥ 1 and ω > 0 such that ||Φ(θ, t)|| ≤ M e^{ωt}, for all (θ, t) ∈ Θ × R_+.

The mapping Φ is called the cocycle associated with the linear skew-product flow π = (Φ, σ).

Let L^1_loc(R_+, X) denote the linear space of all locally Bochner integrable functions u : R_+ → X.
Let π = (Φ, σ) be a linear skew-product flow on E = X × Θ. We consider the variational
integral system

    (S_π)   x_θ(t; x_0, u) = Φ(θ, t)x_0 + ∫_0^t Φ(σ(θ, s), t − s) u(s) ds,   t ≥ 0, θ ∈ Θ

with u ∈ L^1_loc(R_+, X) and x_0 ∈ X.

* The work is supported by The National Research Council CNCSIS-UEFISCSU, PN II Research Grant
ID 1081 code 550.

Definition 1.3. The system (S_π) is said to be uniformly exponentially stable if there are N, ν > 0
such that

    ||x_θ(t; x_0, 0)|| ≤ N e^{−νt} ||x_0||,   for all (θ, t) ∈ Θ × R_+ and all x_0 ∈ X.

Remark 1.4. It is easily seen that the system (S_π) is uniformly exponentially stable if and only
if there are N, ν > 0 such that ||Φ(θ, t)|| ≤ N e^{−νt}, for all (θ, t) ∈ Θ × R_+.

If π = (Φ, σ) is a linear skew-product flow on E = X × Θ and P ∈ C_s(Θ, B(X)), then there
exists a unique linear skew-product flow, denoted π_P = (Φ_P, σ), on X × Θ which satisfies the
variation of constants formulas:

    Φ_P(θ, t)x = Φ(θ, t)x + ∫_0^t Φ(σ(θ, s), t − s) P(σ(θ, s)) Φ_P(θ, s)x ds     (1.1)

and respectively

    Φ_P(θ, t)x = Φ(θ, t)x + ∫_0^t Φ_P(σ(θ, s), t − s) P(σ(θ, s)) Φ(θ, s)x ds     (1.2)

for all (x, θ, t) ∈ E × R_+. Moreover, if M, ω are the exponential growth constants given by
Definition 1.2 (iv) for π, then

    ||Φ_P(θ, t)|| ≤ M e^{(ω + M |||P|||) t},   for all (θ, t) ∈ Θ × R_+.

The perturbed linear skew-product flow π_P = (Φ_P, σ) is obtained inductively (see Theorem
2.1 in (Megan et al., 2002)) via the formula

    Φ_P(θ, t)x = \sum_{n=0}^{∞} Φ_n(θ, t)x

where

    Φ_0(θ, t)x = Φ(θ, t)x   and   Φ_n(θ, t)x = ∫_0^t Φ(σ(θ, s), t − s) P(σ(θ, s)) Φ_{n−1}(θ, s)x ds,   n ≥ 1

for every (x, θ) ∈ E and t ≥ 0.
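As a quick sanity check of the inductive construction, one can take the autonomous case Φ(θ, t) = e^{At} with a constant perturbation P, for which the series of the Φ_n should sum to e^{(A+P)t}. The following Python sketch (assuming numpy and scipy; matrices and horizon are illustrative) evaluates a truncated series on a time grid.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 2.0], [0.0, -3.0]])
    P = np.array([[0.3, 0.0], [0.1, 0.2]])
    T, N = 1.5, 400
    ts = np.linspace(0.0, T, N + 1)
    h = ts[1] - ts[0]

    PhiA = np.array([expm(A * t) for t in ts])          # Phi(theta, t) = e^{A t}

    def convolve(prev):
        """Phi_n(t) = int_0^t Phi(t-s) P Phi_{n-1}(s) ds, trapezoidal rule on the grid."""
        out = np.zeros_like(prev)
        for k in range(1, len(ts)):
            vals = np.array([PhiA[k - j] @ P @ prev[j] for j in range(k + 1)])
            out[k] = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
        return out

    Phi_P, term = PhiA.copy(), PhiA.copy()
    for n in range(1, 10):                              # truncate the series after a few terms
        term = convolve(term)
        Phi_P += term

    # For constant A and P the series sums to e^{(A+P)t}
    print(np.abs(Phi_P[-1] - expm((A + P) * T)).max())  # small quadrature/truncation error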

Let U, Y be two Banach spaces, let B ∈ C_s(Θ, B(U, X)) and C ∈ C_s(Θ, B(X, Y)). We consider
the variational control system (π, B, C) described by the following integral model

    x(θ, t, x_0, u) = Φ(θ, t)x_0 + ∫_0^t Φ(σ(θ, s), t − s) B(σ(θ, s)) u(s) ds
    y(θ, t, x_0, u) = C(σ(θ, t)) x(θ, t, x_0, u)

where t ≥ 0, (x_0, θ) ∈ E and u ∈ L^1_loc(R_+, U).

Two fundamental concepts related to the asymptotic behavior of the associated perturbed
systems (see (Clark et al., 2000), (Curtain & Zwart, 1995), (Sasu & Sasu, 2004)) are
stabilizability and detectability, defined as follows:

Definition 1.5. The system (π, B, C) is said to be:

(i) stabilizable if there exists a mapping F ∈ C_s(Θ, B(X, U)) such that the system (S_{π_BF}) is
uniformly exponentially stable;

(ii) detectable if there exists a mapping K ∈ C_s(Θ, B(Y, X)) such that the system (S_{π_KC}) is
uniformly exponentially stable.




Remark 1.6. (i) The system (π, B, C) is stabilizable if and only if there exist a mapping F ∈
C_s(Θ, B(X, U)) and two constants N, ν > 0 such that the perturbed linear skew-product flow
π_BF = (Φ_BF, σ) has the property

    ||Φ_BF(θ, t)|| ≤ N e^{−νt},   for all (θ, t) ∈ Θ × R_+;

(ii) The system (π, B, C) is detectable if and only if there exist a mapping K ∈ C_s(Θ, B(Y, X))
and two constants N, ν > 0 such that the perturbed linear skew-product flow π_KC = (Φ_KC, σ)
has the property

    ||Φ_KC(θ, t)|| ≤ N e^{−νt},   for all (θ, t) ∈ Θ × R_+.

In the present work we will investigate the connections between the stabilizability and
the detectability of the variational control system (π, B, C) and the asymptotic properties
of the variational integral system (S_π). We propose a new method based on input-output
techniques and on the behavior of some associated operators between certain function spaces.
We present an approach to the stabilizability and detectability problems for variational control
systems distinct from those in the existing literature, working with several representative
classes of translation invariant function spaces (see Section 2 in (Sasu, 2008) and also (Bennet
& Sharpley, 1988)); we thus extend the applicability area and provide new perspectives
concerning this framework.

A special application of our main results is the study of the connections between
exponential stability and the stabilizability and detectability of nonautonomous control
systems in infinite dimensional spaces. The nonautonomous case treated in this chapter
includes as consequences many interesting situations, among which we mention the results
obtained by Clark, Latushkin, Montgomery-Smith and Randolph (see (Clark et al., 2000))
and by the authors (see (Sasu & Sasu, 2004)) concerning the connections between stabilizability,
detectability and exponential stability.
2. Preliminaries on Banach function spaces and auxiliary results

In what follows we recall several fundamental properties of Banach function spaces and we
introduce the main tools of our investigation. Indeed, let M(R_+, R) be the linear space of all
Lebesgue measurable functions u : R_+ → R, identifying the functions equal a.e.

Definition 2.1. A linear subspace B of M(R_+, R) is called a normed function space if there is a
mapping |·|_B : B → R_+ such that:

(i) |u|_B = 0 if and only if u = 0 a.e.;

(ii) |αu|_B = |α| |u|_B, for all (α, u) ∈ R × B;

(iii) |u + v|_B ≤ |u|_B + |v|_B, for all u, v ∈ B;

(iv) if |u(t)| ≤ |v(t)| a.e. t ∈ R_+ and v ∈ B, then u ∈ B and |u|_B ≤ |v|_B.

If (B, |·|_B) is complete, then B is called a Banach function space.

Remark 2.2. If (B, |·|_B) is a Banach function space and u ∈ B then |u(·)| ∈ B.

A remarkable class of Banach function spaces is represented by the translation invariant
spaces. These spaces have a special role in the study of the asymptotic properties of
dynamical systems using control type techniques (see Sasu (2008), Sasu & Sasu (2004)).




Definition 2.3. A Banach function space (B, |·|_B) is said to be invariant to translations if for
every u : R_+ → R and every t > 0, u ∈ B if and only if the function

    u_t : R_+ → R,   u_t(s) = u(s − t) for s ≥ t,   u_t(s) = 0 for s ∈ [0, t)

belongs to B and |u_t|_B = |u|_B.

Let C_c(R_+, R) denote the linear space of all continuous functions v : R_+ → R with compact
support contained in R_+ and let L^1_loc(R_+, R) denote the linear space of all locally integrable
functions u : R_+ → R.

We denote by T(R_+) the class of all Banach function spaces B which are invariant to
translations and satisfy the following properties:

(i) C_c(R_+, R) ⊂ B ⊂ L^1_loc(R_+, R);

(ii) if B \ L^1(R_+, R) ≠ ∅ then there is a continuous function in B \ L^1(R_+, R).

For every A ⊂ R_+ we denote by χ_A the characteristic function of the set A.

Remark 2.4. (i) If B ∈ T(R_+), then χ_[0,t) ∈ B, for all t > 0.

(ii) Let B ∈ T(R_+), u ∈ B and t > 0. Then the function ũ_t : R_+ → R, ũ_t(s) = u(s + t) belongs
to B and |ũ_t|_B ≤ |u|_B (see (Sasu, 2008), Lemma 5.4).

Definition 2.5. (i) Let u, v ∈ M(R_+, R). We say that u and v are equimeasurable if for every
t > 0 the sets {s ∈ R_+ : |u(s)| > t} and {s ∈ R_+ : |v(s)| > t} have the same measure.

(ii) A Banach function space (B, |·|_B) is rearrangement invariant if for every pair of equimeasurable
functions u, v : R_+ → R_+ with u ∈ B we have that v ∈ B and |u|_B = |v|_B.

We denote by R(R_+) the class of all Banach function spaces B ∈ T(R_+) which are
rearrangement invariant.

A remarkable class of rearrangement invariant function spaces is represented by the so-called
Orlicz spaces, which are introduced in the following remark:

Remark 2.6. Let φ : R_+ → R_+ be a non-decreasing left-continuous function which is
not identically zero on (0, ∞). The Young function associated with φ is defined by Y_φ(t) =
∫_0^t φ(s) ds. For every u ∈ M(R_+, R) let M_φ(u) := ∫_0^∞ Y_φ(|u(s)|) ds. The set O_φ of all
u ∈ M(R_+, R) with the property that there is k > 0 such that M_φ(ku) < ∞ is a linear
space. With respect to the norm |u|_φ := inf{k > 0 : M_φ(u/k) ≤ 1}, O_φ is a Banach space,
called the Orlicz space associated with φ.

The Orlicz spaces are rearrangement invariant (see (Bennet & Sharpley, 1988), Theorem 8.9).
Moreover, it is well known that, for every p ∈ [1, ∞], the space L^p(R_+, R) is a particular case
of Orlicz space.

Let now (X, ||·||) be a real or complex Banach space. For every B ∈ T(R_+) we denote
by B(R_+, X) the linear space of all Bochner measurable functions u : R_+ → X with the
property that the mapping N_u : R_+ → R_+, N_u(t) = ||u(t)|| lies in B. Endowed with the norm
||u||_{B(R_+,X)} := |N_u|_B, B(R_+, X) is a Banach space.

Let (Θ, d) be a metric space and let E = X × Θ. Let π = (Φ, σ) be a linear skew-product flow
on E = X × Θ. We consider the variational integral system

    (S_π)   x_θ(t; x_0, u) = Φ(θ, t)x_0 + ∫_0^t Φ(σ(θ, s), t − s) u(s) ds,   t ≥ 0, θ ∈ Θ


with u ∈ L^1_loc(R_+, X) and x_0 ∈ X.

An important stability concept related to the asymptotic behavior of dynamical systems is
described by the following:

Definition 2.7. Let W ∈ T(R_+). The system (S_π) is said to be completely (W(R_+, X),
W(R_+, X))-stable if the following assertions hold:

(i) for every u ∈ W(R_+, X) and every θ ∈ Θ the solution x_θ(·; 0, u) ∈ W(R_+, X);

(ii) there is λ > 0 such that ||x_θ(·; 0, u)||_{W(R_+,X)} ≤ λ ||u||_{W(R_+,X)}, for all (u, θ) ∈ W(R_+, X) × Θ.

A characterization of the uniform exponential stability of variational systems in terms of the
complete stability with respect to a pair of function spaces has been obtained in (Sasu, 2008)
(see Corollary 3.19) and is given by:

Theorem 2.8. Let W ∈ R(R_+). The system (S_π) is uniformly exponentially stable if and only if
(S_π) is completely (W(R_+, X), W(R_+, X))-stable.

The problem can also be treated in the setting of continuous functions. Indeed, let
C_b(R_+, R) be the space of all bounded continuous functions u : R_+ → R. Let C_0(R_+, R)
be the space of all continuous functions u : R_+ → R with lim_{t→∞} u(t) = 0 and let C_00(R_+, R) :=
{u ∈ C_0(R_+, R) : u(0) = 0}.

Definition 2.9. Let V ∈ {C_b(R_+, R), C_0(R_+, R), C_00(R_+, R)}. The system (S_π) is said to be
completely (V(R_+, X), V(R_+, X))-stable if the following assertions hold:
(i) for every u ∈ V(R_+, X) and every θ ∈ Θ the solution x_θ(·; 0, u) ∈ V(R_+, X);
(ii) there is λ > 0 such that ||x_θ(·; 0, u)||_{V(R_+,X)} ≤ λ ||u||_{V(R_+,X)}, for all (u, θ) ∈ V(R_+, X) × Θ.

For the proof of the next result we refer to Corollary 3.24 in (Sasu, 2008) or, alternatively, to
Theorem 5.1 in (Megan et al., 2005).

Theorem 2.10. Let V ∈ {C_b(R_+, R), C_0(R_+, R), C_00(R_+, R)}. The system (S_π) is uniformly
exponentially stable if and only if (S_π) is completely (V(R_+, X), V(R_+, X))-stable.

Remark 2.11. Let W ∈ R(R_+) ∪ {C_0(R_+, X), C_00(R_+, X), C_b(R_+, X)}. If the system (S_π) is
uniformly exponentially stable then for every θ ∈ Θ the linear operator

    P_W^θ : W(R_+, X) → W(R_+, X),   (P_W^θ u)(t) = ∫_0^t Φ(σ(θ, s), t − s) u(s) ds

is correctly defined and bounded. Moreover, if λ > 0 is given by Definition 2.7 or respectively
by Definition 2.9, then we have that sup_{θ∈Θ} ||P_W^θ|| ≤ λ.

These results have several interesting applications in control theory, among which we mention
those concerning the robustness problems (see (Sasu, 2008)), which lead to a novel estimation of
the lower bound of the stability radius, as well as to the study of the connections between
stability and the stabilizability and detectability of associated control systems, as we will see in
what follows. It is worth mentioning that these aspects were studied for the very first time for
the case of systems associated to evolution operators in (Clark et al., 2000) and were extended
to linear skew-product flows in (Megan et al., 2002).




3. Stabilizability and detectability of variational control systems

As stated from the very beginning, in this section our attention focuses on the connections
between stabilizability, detectability and uniform exponential stability. Let X be a Banach
space, let (Θ, d) be a metric space and let π = (Φ, σ) be a linear skew-product flow on E =
X × Θ. We consider the variational integral system

    (S_π)   x_θ(t; x_0, u) = Φ(θ, t)x_0 + ∫_0^t Φ(σ(θ, s), t − s) u(s) ds,   t ≥ 0, θ ∈ Θ

with u ∈ L^1_loc(R_+, X) and x_0 ∈ X.

Let U, Y be Banach spaces and let B ∈ C_s(Θ, B(U, X)), C ∈ C_s(Θ, B(X, Y)). We consider the
variational control system (π, B, C) described by the following integral model

    x(θ, t, x_0, u) = Φ(θ, t)x_0 + ∫_0^t Φ(σ(θ, s), t − s) B(σ(θ, s)) u(s) ds
    y(θ, t, x_0, u) = C(σ(θ, t)) x(θ, t, x_0, u)

where t ≥ 0, (x_0, θ) ∈ E and u ∈ L^1_loc(R_+, U).

According to Definition 1.5, it is obvious that if the system (S_π) is uniformly exponentially
stable, then the control system (π, B, C) is stabilizable (via the trivial feedback F = 0) and
also detectable (via the trivial feedback K = 0). The natural question arises whether the
converse implication holds.
converse implication holds. 

Example 3.1. Let X = R, Θ = R and let σ(θ, t) = θ + t. Let (S_π) be a variational integral
system such that Φ(θ, t) = I_d (the identity operator on X), for all (θ, t) ∈ Θ × R_+. Let U =
Y = X and let B(θ) = C(θ) = I_d, for all θ ∈ Θ. Let δ > 0. By considering F(θ) = −δ I_d, for all
θ ∈ Θ, from relation (1.1) we obtain that

    Φ_BF(θ, t)x = x − δ ∫_0^t Φ_BF(θ, s)x ds,   for all t ≥ 0

for every (x, θ) ∈ E. This implies that Φ_BF(θ, t)x = e^{−δt} x, for all t ≥ 0 and all (x, θ) ∈ E,
so the perturbed system (S_{π_BF}) is uniformly exponentially stable. This shows that the system
(π, B, C) is stabilizable.

Similarly, if δ > 0, for K(θ) = −δ I_d, for all θ ∈ Θ, we deduce that the variational control
system (π, B, C) is also detectable.

In conclusion, the variational control system (π, B, C) is both stabilizable and detectable, but,
for all that, the variational integral system (S_π) is not uniformly exponentially stable.
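As a tiny numerical illustration of Example 3.1 (assuming numpy; the step size and δ are arbitrary): the unperturbed cocycle is the identity and shows no decay, while the feedback-perturbed cocycle solves the integral equation above and behaves like e^{−δt}.

    import numpy as np

    delta, T, N = 0.5, 10.0, 10000
    ts, h = np.linspace(0.0, T, N + 1, retstep=True)

    # Unperturbed cocycle of Example 3.1: Phi(theta, t)x = x (no decay at all)
    phi = np.ones_like(ts)

    # Perturbed cocycle: Phi_BF(theta, t)x = x - delta * int_0^t Phi_BF(theta, s)x ds,
    # solved by forward stepping of the equivalent ODE y' = -delta*y, y(0) = 1
    phi_bf = np.empty_like(ts)
    phi_bf[0] = 1.0
    for k in range(N):
        phi_bf[k + 1] = phi_bf[k] * (1.0 - delta * h)

    print(phi[-1])                              # 1.0: (S_pi) is not exponentially stable
    print(phi_bf[-1], np.exp(-delta * T))       # both approximately e^{-delta*T}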

It follows that the stabilizability and/or the detectability of the control system (π, B, C) are
not sufficient conditions for the uniform exponential stability of the system (S_π). Naturally,
additional hypotheses are required. In what follows we shall prove that certain input-output
conditions provide a complete resolution of this problem. The answer will be given employing
new methods based on function space techniques.
Indeed, for every θ ∈ Θ, we define

    P^θ : L^1_loc(R_+, X) → L^1_loc(R_+, X),   (P^θ w)(t) = ∫_0^t Φ(σ(θ, s), t − s) w(s) ds




and respectively

    B^θ : L^1_loc(R_+, U) → L^1_loc(R_+, X),   (B^θ u)(t) = B(σ(θ, t)) u(t)

    C^θ : L^1_loc(R_+, X) → L^1_loc(R_+, Y),   (C^θ v)(t) = C(σ(θ, t)) v(t).

We also associate with the control system S = (π, B, C) three families of input-output
mappings, as follows: the left input-output operators {L^θ}_{θ∈Θ} defined by

    L^θ : L^1_loc(R_+, U) → L^1_loc(R_+, X),   L^θ := P^θ B^θ

the right input-output operators {R^θ}_{θ∈Θ} given by

    R^θ : L^1_loc(R_+, X) → L^1_loc(R_+, Y),   R^θ := C^θ P^θ

and respectively the global input-output operators {G^θ}_{θ∈Θ} defined by

    G^θ : L^1_loc(R_+, U) → L^1_loc(R_+, Y),   G^θ := C^θ P^θ B^θ.

A fundamental stability concept for families of linear operators is given by the following:

Definition 3.2. Let Z_1, Z_2 be two Banach spaces and let W ∈ T(R_+) be a Banach function
space. A family of linear operators {O^θ : L^1_loc(R_+, Z_1) → L^1_loc(R_+, Z_2)}_{θ∈Θ} is said to be
(W(R_+, Z_1), W(R_+, Z_2))-stable if the following conditions are satisfied:

(i) for every α_1 ∈ W(R_+, Z_1) and every θ ∈ Θ, O^θ α_1 ∈ W(R_+, Z_2);

(ii) there is m > 0 such that ||O^θ α_1||_{W(R_+, Z_2)} ≤ m ||α_1||_{W(R_+, Z_1)}, for all α_1 ∈ W(R_+, Z_1) and
all θ ∈ Θ.

Thus, we observe that if W ∈ R(R_+), then the variational integral system (S_π) is uniformly
exponentially stable if and only if the family {P^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, X))-stable (see
also Remark 2.11).

Remark 3.3. Let Z_1, Z_2 be two Banach spaces and let W ∈ T(R_+) be a Banach function space.
If Q ∈ C_s(Θ, B(Z_1, Z_2)) then the family {Q^θ}_{θ∈Θ} defined by

    Q^θ : L^1_loc(R_+, Z_1) → L^1_loc(R_+, Z_2),   (Q^θ α)(t) = Q(σ(θ, t)) α(t)

is (W(R_+, Z_1), W(R_+, Z_2))-stable. Indeed, this follows from Definition 2.1 (iv) by observing
that

    ||(Q^θ α)(t)|| ≤ |||Q||| ||α(t)||,   for all t ≥ 0, all α ∈ W(R_+, Z_1) and all θ ∈ Θ.

The main result of this section is:

Theorem 3.4. Let W be a Banach function space such that W ∈ R(R_+). The following assertions are
equivalent:

(i) the variational integral system (S_π) is uniformly exponentially stable;

(ii) the variational control system (π, B, C) is stabilizable and the family of the left input-output
operators {L^θ}_{θ∈Θ} is (W(R_+, U), W(R_+, X))-stable;

(iii) the variational control system (π, B, C) is detectable and the family of the right input-output
operators {R^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, Y))-stable;

(iv) the variational control system (π, B, C) is stabilizable, detectable and the family of the global
input-output operators {G^θ}_{θ∈Θ} is (W(R_+, U), W(R_+, Y))-stable.



Proof. We will independently prove each of the equivalences (i) ⟺ (ii), (i) ⟺ (iii) and
(i) ⟺ (iv). Indeed, we start with the first one and prove that (i) ⟹ (ii).
Taking into account that (S_π) is uniformly exponentially stable, we have that the family
{P^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, X))-stable. In addition, observing that

    ||(B^θ u)(t)|| ≤ |||B||| ||u(t)||,   for all t ≥ 0, all u ∈ W(R_+, U) and all θ ∈ Θ

from Definition 2.1 (iv) we deduce that B^θ u ∈ W(R_+, X) with ||B^θ u||_{W(R_+,X)} ≤ |||B||| ||u||_{W(R_+,U)},
and hence

    ||L^θ u||_{W(R_+,X)} ≤ sup_{θ∈Θ} ||P^θ|| |||B||| ||u||_{W(R_+,U)},   for all u ∈ W(R_+, U) and all θ ∈ Θ,

so the family {L^θ}_{θ∈Θ} is (W(R_+, U), W(R_+, X))-stable.

To prove the implication (ii) ⟹ (i), let F ∈ C_s(Θ, B(X, U)) be such that the
system (S_{π_BF}) is uniformly exponentially stable. It follows that the family {H^θ}_{θ∈Θ} is
(W(R_+, X), W(R_+, X))-stable, where

    H^θ : L^1_loc(R_+, X) → L^1_loc(R_+, X),   (H^θ u)(t) = ∫_0^t Φ_BF(σ(θ, s), t − s) u(s) ds,   t ≥ 0, θ ∈ Θ.

For every θ ∈ Θ let

    F^θ : L^1_loc(R_+, X) → L^1_loc(R_+, U),   (F^θ u)(t) = F(σ(θ, t)) u(t).

Then from Remark 3.3 we have that the family {F^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, U))-stable.
Let θ ∈ Θ and let u ∈ L^1_loc(R_+, X). Using Fubini's theorem and formula (1.1), we successively
deduce that

    (L^θ F^θ H^θ u)(t) = ∫_0^t ∫_0^s Φ(σ(θ, s), t − s) B(σ(θ, s)) F(σ(θ, s)) Φ_BF(σ(θ, τ), s − τ) u(τ) dτ ds

    = ∫_0^t ∫_τ^t Φ(σ(θ, s), t − s) B(σ(θ, s)) F(σ(θ, s)) Φ_BF(σ(θ, τ), s − τ) u(τ) ds dτ

    = ∫_0^t ∫_0^{t−τ} Φ(σ(θ, τ + ξ), t − τ − ξ) B(σ(θ, τ + ξ)) F(σ(θ, τ + ξ)) Φ_BF(σ(θ, τ), ξ) u(τ) dξ dτ

    = ∫_0^t [Φ_BF(σ(θ, τ), t − τ) u(τ) − Φ(σ(θ, τ), t − τ) u(τ)] dτ

    = (H^θ u)(t) − (P^θ u)(t),   for all t ≥ 0.

This shows that

    P^θ u = H^θ u − L^θ F^θ H^θ u,   for all u ∈ L^1_loc(R_+, X) and all θ ∈ Θ.     (3.1)

Let m_1 and m_2 be two constants given by Definition 3.2 (ii) for {H^θ}_{θ∈Θ} and for {L^θ}_{θ∈Θ},
respectively. From relation (3.1) we deduce that P^θ u ∈ W(R_+, X), for every u ∈ W(R_+, X),
and

    ||P^θ u||_{W(R_+,X)} ≤ m_1 (1 + m_2 |||F|||) ||u||_{W(R_+,X)},   for all u ∈ W(R_+, X) and all θ ∈ Θ.

From the above relation we obtain that the family {P^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, X))-stable,
so the system (S_π) is uniformly exponentially stable.

The implication (i) ⟹ (iii) follows using arguments similar to those used in the proof
of (i) ⟹ (ii). To prove (iii) ⟹ (i), let K ∈ C_s(Θ, B(Y, X)) be such that the system (S_{π_KC})
is uniformly exponentially stable. Then the family {T^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, X))-stable,
where

    T^θ : L^1_loc(R_+, X) → L^1_loc(R_+, X),   (T^θ u)(t) = ∫_0^t Φ_KC(σ(θ, s), t − s) u(s) ds.

For every θ ∈ Θ we define

    K^θ : L^1_loc(R_+, Y) → L^1_loc(R_+, X),   (K^θ u)(t) = K(σ(θ, t)) u(t).

From Remark 3.3 we have that the family {K^θ}_{θ∈Θ} is (W(R_+, Y), W(R_+, X))-stable.
Using Fubini's theorem and relation (1.2), by employing arguments similar to those
from the proof of the implication (ii) ⟹ (i), we deduce that

    P^θ u = T^θ u − T^θ K^θ R^θ u,   for all u ∈ L^1_loc(R_+, X) and all θ ∈ Θ.     (3.2)

Denoting by q_1 and q_2 some constants given by Definition 3.2 (ii) for {T^θ}_{θ∈Θ} and for
{R^θ}_{θ∈Θ}, respectively, from relation (3.2) we have that P^θ u ∈ W(R_+, X), for every u ∈
W(R_+, X), and

    ||P^θ u||_{W(R_+,X)} ≤ q_1 (1 + q_2 |||K|||) ||u||_{W(R_+,X)},   for all u ∈ W(R_+, X) and all θ ∈ Θ.

Hence we deduce that the family {P^θ}_{θ∈Θ} is (W(R_+, X), W(R_+, X))-stable, which shows that
the system (S_π) is uniformly exponentially stable.

The implication (i) ⟹ (iv) is obvious, taking into account the above items. To prove that
(iv) ⟹ (i), let K ∈ C_s(Θ, B(Y, X)) be such that the system (S_{π_KC}) is uniformly exponentially
stable and let {K^θ}_{θ∈Θ} and {T^θ}_{θ∈Θ} be defined in the same manner as in the previous stage.
Then, following the same steps as in the previous implications, we obtain that

    L^θ u = T^θ B^θ u − T^θ K^θ G^θ u,   for all u ∈ L^1_loc(R_+, U) and all θ ∈ Θ.     (3.3)

From relation (3.3) we deduce that the family {L^θ}_{θ∈Θ} is (W(R_+, U), W(R_+, X))-stable.
Taking into account that the system (π, B, C) is stabilizable and applying the implication
(ii) ⟹ (i), we conclude that the system (S_π) is uniformly exponentially stable. □

Corollary 3.5. Let V ∈ {C_b(R_+, R), C_0(R_+, R), C_00(R_+, R)}. The following assertions are
equivalent:

(i) the variational integral system (S_π) is uniformly exponentially stable;

(ii) the variational control system (π, B, C) is stabilizable and the family of the left input-output
operators {L^θ}_{θ∈Θ} is (V(R_+, U), V(R_+, X))-stable;

(iii) the variational control system (π, B, C) is detectable and the family of the right input-output
operators {R^θ}_{θ∈Θ} is (V(R_+, X), V(R_+, Y))-stable;

(iv) the variational control system (π, B, C) is stabilizable, detectable and the family of the global
input-output operators {G^θ}_{θ∈Θ} is (V(R_+, U), V(R_+, Y))-stable.

Proof. This follows using arguments and estimates similar to those from the proof of
Theorem 3.4, by applying Theorem 2.10. □

4. Applications to nonautonomous systems

An interesting application of the main results from the previous section is to deduce necessary
and sufficient conditions for the uniform exponential stability of nonautonomous systems in
terms of stabilizability and detectability. This topic was first considered in (Clark et al., 2000).
We propose in what follows a new method for the resolution of this problem based on the
application of the conclusions from the variational case, using arbitrary Banach function spaces.
Let X be a Banach space and let I_d denote the identity operator on X.




Definition 4.1. A family U = {U(t, s)}_{t≥s≥0} ⊂ B(X) is called an evolution family if the
following properties hold:

(i) U(t, t) = I_d and U(t, s)U(s, t_0) = U(t, t_0), for all t ≥ s ≥ t_0 ≥ 0;

(ii) there are M ≥ 1 and ω > 0 such that ||U(t, s)|| ≤ M e^{ω(t−s)}, for all t ≥ s ≥ 0;

(iii) for every x ∈ X the mapping (t, s) ↦ U(t, s)x is continuous.

Remark 4.2. For every P ∈ C_s(R_+, B(X)) (see e.g. (Curtain & Zwart, 1995)) there is a unique
evolution family U_P = {U_P(t, s)}_{t≥s≥0} such that the variation of constants formulas hold:

    U_P(t, s)x = U(t, s)x + ∫_s^t U(t, τ) P(τ) U_P(τ, s)x dτ,   for all t ≥ s ≥ 0 and all x ∈ X

and respectively

    U_P(t, s)x = U(t, s)x + ∫_s^t U_P(t, τ) P(τ) U(τ, s)x dτ,   for all t ≥ s ≥ 0 and all x ∈ X.

Let U = {U(t, s)}_{t≥s≥0} be an evolution family on X. We consider the nonautonomous integral
system

    (S_U)   x_s(t; x_0, u) = U(t, s)x_0 + ∫_s^t U(t, τ) u(τ) dτ,   t ≥ s, s ≥ 0

with u ∈ L^1_loc(R_+, X) and x_0 ∈ X.

Definition 4.3. The system (S_U) is said to be uniformly exponentially stable if there are N, ν > 0
such that ||x_s(t; x_0, 0)|| ≤ N e^{−ν(t−s)} ||x_0||, for all t ≥ s ≥ 0 and all x_0 ∈ X.

Remark 4.4. The system (S_U) is uniformly exponentially stable if and only if there are N, ν > 0
such that ||U(t, s)|| ≤ N e^{−ν(t−s)}, for all t ≥ s ≥ 0.

Definition 4.5. Let W ∈ T(R_+). The system (S_U) is said to be completely (W(R_+, X),
W(R_+, X))-stable if for every u ∈ W(R_+, X) the solution x_0(·; 0, u) ∈ W(R_+, X).

Remark 4.6. If the system (S_U) is completely (W(R_+, X), W(R_+, X))-stable, then it makes
sense to consider the linear operator

    P : W(R_+, X) → W(R_+, X),   P(u) = x_0(·; 0, u).

It is easy to see that P is closed, so it is bounded.

Let now U, Y be Banach spaces, let B ∈ C_s(R_+, B(U, X)) and let C ∈ C_s(R_+, B(X, Y)). We
consider the nonautonomous control system (U, B, C) described by the following integral
model

    x_s(t; x_0, u) = U(t, s)x_0 + ∫_s^t U(t, τ) B(τ) u(τ) dτ,   t ≥ s, s ≥ 0
    y_s(t; x_0, u) = C(t) x_s(t; x_0, u),   t ≥ s, s ≥ 0

with u ∈ L^1_loc(R_+, U) and x_0 ∈ X.

Definition 4.7. The system (U, B, C) is said to be:

(i) stabilizable if there exists F ∈ C_s(R_+, B(X, U)) such that the system (S_{U_BF}) is uniformly
exponentially stable;

(ii) detectable if there exists G ∈ C_s(R_+, B(Y, X)) such that the system (S_{U_GC}) is uniformly
exponentially stable.




We consider the operators

    B : L^1_loc(R_+, U) → L^1_loc(R_+, X),   (B u)(t) = B(t) u(t)

    C : L^1_loc(R_+, X) → L^1_loc(R_+, Y),   (C v)(t) = C(t) v(t)

and we associate with the system (U, B, C) three input-output operators: the left input-output
operator defined by

    L : L^1_loc(R_+, U) → L^1_loc(R_+, X),   L = P B

the right input-output operator given by

    R : L^1_loc(R_+, X) → L^1_loc(R_+, Y),   R = C P

and respectively the global input-output operator defined by

    G : L^1_loc(R_+, U) → L^1_loc(R_+, Y),   G = C P B.

Definition 4.8. Let Z_1, Z_2 be two Banach spaces and let W ∈ T(R_+) be a Banach
function space. An operator Q : L^1_loc(R_+, Z_1) → L^1_loc(R_+, Z_2) is said to be
(W(R_+, Z_1), W(R_+, Z_2))-stable if for every α ∈ W(R_+, Z_1) the function Qα ∈ W(R_+, Z_2).

The main result of this section is:

Theorem 4.9. Let W be a Banach function space such that W ∈ R(R_+). The following assertions are
equivalent:

(i) the integral system (S_U) is uniformly exponentially stable;

(ii) the control system (U, B, C) is stabilizable and the left input-output operator L is
(W(R_+, U), W(R_+, X))-stable;

(iii) the control system (U, B, C) is detectable and the right input-output operator R is
(W(R_+, X), W(R_+, Y))-stable;

(iv) the control system (U, B, C) is stabilizable, detectable and the global input-output operator G is
(W(R_+, U), W(R_+, Y))-stable.

Proof. We prove the equivalence (i) ⟺ (ii); the other equivalences, (i) ⟺ (iii) and (i) ⟺ (iv), being similar.

Indeed, the implication (i) ⟹ (ii) is immediate. To prove that (ii) ⟹ (i), let Θ = ℝ₊, σ : Θ × ℝ₊ → Θ, σ(θ,t) = θ + t, and let Φ(θ,t) = U(t + θ, θ), for all (θ,t) ∈ Θ × ℝ₊. Then π = (Φ, σ) is a linear skew-product flow and it makes sense to associate with π the following integral system

(S_π)   x_θ(t; x_0, u) = Φ(θ,t)x_0 + ∫_0^t Φ(σ(θ,s), t − s)u(s) ds,   t ≥ 0, θ ∈ Θ

with u ∈ L¹_loc(ℝ₊, X) and x_0 ∈ X.

We also consider the control system (π, B, C) given by

x(θ,t; x_0, u) = Φ(θ,t)x_0 + ∫_0^t Φ(σ(θ,s), t − s)B(σ(θ,s))u(s) ds
y(θ,t; x_0, u) = C(σ(θ,t)) x(θ,t; x_0, u)

where t ≥ 0, (x_0, θ) ∈ X × Θ and u ∈ L¹_loc(ℝ₊, U). For every θ ∈ Θ we associate with the system (π, B, C) the operators P_θ, B_θ and L_θ, using their definitions from Section 3.




We prove that the family {L_θ}_{θ∈Θ} is (W(ℝ₊,U), W(ℝ₊,X))-stable. Let θ ∈ Θ and let α ∈ W(ℝ₊, U). Since W is invariant under translations, the function

α_θ : ℝ₊ → U,   α_θ(t) = α(t − θ) for t ≥ θ and α_θ(t) = 0 for t ∈ [0, θ)

belongs to W(ℝ₊, U) and ||α_θ||_{W(ℝ₊,U)} = ||α||_{W(ℝ₊,U)}. Since the operator L is (W(ℝ₊,U), W(ℝ₊,X))-stable we obtain that the function

φ : ℝ₊ → X,   φ(t) = (L α_θ)(t)

belongs to W(ℝ₊, X). Using Remark 2.4 (ii) we deduce that the function

γ : ℝ₊ → X,   γ(t) = φ(t + θ)

belongs to W(ℝ₊, X) and ||γ||_{W(ℝ₊,X)} ≤ ||φ||_{W(ℝ₊,X)}. We observe that

(L_θ α)(t) = ∫_0^t U(θ + t, θ + s)B(θ + s)α(s) ds = ∫_θ^{θ+t} U(θ + t, r)B(r)α(r − θ) dr
           = ∫_0^{θ+t} U(θ + t, r)B(r)α_θ(r) dr = (L α_θ)(θ + t) = γ(t),   ∀ t ≥ 0.

This implies that L_θα belongs to W(ℝ₊, X) and

||L_θα||_{W(ℝ₊,X)} = ||γ||_{W(ℝ₊,X)} ≤ ||φ||_{W(ℝ₊,X)} ≤ ||L|| ||α_θ||_{W(ℝ₊,U)} = ||L|| ||α||_{W(ℝ₊,U)}.   (4.1)

Since θ ∈ Θ and α ∈ W(ℝ₊, U) were arbitrary, from (4.1) we deduce that the family {L_θ}_{θ∈Θ} is (W(ℝ₊,U), W(ℝ₊,X))-stable.

According to our hypothesis we have that the system (U, B, C) is stabilizable. Then there is F ∈ C_s(ℝ₊, B(X,U)) such that the (unique) evolution family U_BF = {U_BF(t,s)}_{t≥s≥0} which satisfies the equation

U_BF(t,s)x = U(t,s)x + ∫_s^t U(t,τ)B(τ)F(τ)U_BF(τ,s)x dτ,   ∀ t ≥ s ≥ 0, ∀ x ∈ X   (4.2)

has the property that there are N, ν > 0 such that

||U_BF(t,s)|| ≤ N e^{−ν(t−s)},   ∀ t ≥ s ≥ 0.   (4.3)

For every (θ,t) ∈ Θ × ℝ₊, let Φ̃(θ,t) := U_BF(θ + t, θ). Then we have that π̃ = (Φ̃, σ) is a linear skew-product flow. Moreover, using relation (4.2) we deduce that

∫_0^t Φ(σ(θ,s), t − s) B(σ(θ,s)) F(σ(θ,s)) Φ̃(θ,s)x ds
   = ∫_0^t U(θ + t, θ + s) B(θ + s) F(θ + s) U_BF(θ + s, θ)x ds
   = ∫_θ^{θ+t} U(θ + t, τ) B(τ) F(τ) U_BF(τ, θ)x dτ
   = U_BF(θ + t, θ)x − U(θ + t, θ)x = Φ̃(θ,t)x − Φ(θ,t)x   (4.4)

for all (θ, t) ∈ Θ × ℝ₊ and all x ∈ X. According to Theorem 2.1 in (Megan et al., 2002), from relation (4.4) it follows that

Φ̃(θ,t) = Φ_BF(θ,t),   ∀ (θ,t) ∈ Θ × ℝ₊

so π̃ = π_BF. Hence from relation (4.3) we have that

||Φ_BF(θ,t)|| = ||U_BF(θ + t, θ)|| ≤ N e^{−νt},   ∀ t ≥ 0, ∀ θ ∈ Θ

which shows that the system (S_{π_BF}) is uniformly exponentially stable. So the system (π, B, C) is stabilizable.

In this way we have proved that the system (π, B, C) is stabilizable and the associated left input-output family {L_θ}_{θ∈Θ} is (W(ℝ₊,U), W(ℝ₊,X))-stable. By applying Theorem 3.4 we deduce that the system (S_π) is uniformly exponentially stable. Then, there are N, δ > 0 such that

||Φ(θ,t)|| ≤ N e^{−δt},   ∀ t ≥ 0, ∀ θ ∈ Θ.

This implies that

||U(t,s)|| = ||Φ(s, t − s)|| ≤ N e^{−δ(t−s)},   ∀ t ≥ s ≥ 0.   (4.5)

From inequality (4.5) and Remark 4.4 we obtain that the system (S_U) is uniformly exponentially stable. □

Remark 4.10. The version of the above result, for the case when W = L^p(ℝ₊,ℝ) with p ∈ [1, ∞), was proved for the first time by Clark, Latushkin, Montgomery-Smith and Randolph in (Clark et al., 2000), employing evolution semigroup techniques.

The method may be also extended for spaces of continuous functions, as the following result 
shows: 

Corollary 4.11. Let V ∈ {C_b(ℝ₊,ℝ), C_0(ℝ₊,ℝ), C_{00}(ℝ₊,ℝ)}. The following assertions are equivalent:

(i) the system (S_U) is uniformly exponentially stable;

(ii) the system (U, B, C) is stabilizable and the left input-output operator L is (V(ℝ₊,U), V(ℝ₊,X))-stable;

(iii) the system (U, B, C) is detectable and the right input-output operator R is (V(ℝ₊,X), V(ℝ₊,Y))-stable;

(iv) the system (U, B, C) is stabilizable, detectable and the global input-output operator G is (V(ℝ₊,U), V(ℝ₊,Y))-stable.

Proof. This follows using Corollary 3.5 and arguments similar to those from the proof of Theorem 4.9. □

5. Conclusions 

Stabilizability and detectability of variational/nonautonomous control systems are two properties which are strongly related to the stable behavior of the initial integral system. These two properties (not even taken together) can assure the uniform exponential stability of the initial system, as Example 3.1 shows. But, in association with the stability of certain input-output operators, the stabilizability and/or the detectability of the control system (π, B, C) imply the existence of the exponentially stable behavior of the initial system (S_π). Here we have extended the topic from evolution families to variational systems and the obtained results are given in a more general context. As we have shown in Remark 2.6, the spaces involved in the stability properties of the associated input-output operators may be not only L^p-spaces but also general Orlicz function spaces, an aspect that creates an interesting link between the modern control theory of dynamical systems and the classical interpolation theory.
It is worth mentioning that the framework presented in this chapter may also be extended to some slightly weaker concepts, taking into account the main results concerning the uniform stability concept from Section 3 in (Sasu, 2008) (see Definition 3.3 and Theorem 3.6 in (Sasu, 2008)). More precisely, considering that the system (π, B, C) is weakly stabilizable (respectively weakly detectable) if there exists a mapping F ∈ C_s(Θ, B(X,U)) (respectively K ∈ C_s(Θ, B(Y,X))) such that the system (S_{π_BF}) (respectively (S_{π_KC})) is uniformly stable, then, starting with the result provided by Theorem 3.6 in (Sasu, 2008), the methods from the present chapter may be applied to the study of the uniform stability in terms of weak stabilizability and weak detectability. In the authors' opinion, the technical trick of the new study will rely on the fact that, in this case, the families of the associated input-output operators will have to be (L¹, L^∞)-stable.

6. References 

Bennett, C. & Sharpley, R. (1988). Interpolation of Operators, Pure Appl. Math. 129, ISBN 0-12-088730-4
Clark, S.; Latushkin, Y.; Montgomery-Smith, S. & Randolph, T. (2000). Stability radius and internal versus external stability in Banach spaces: an evolution semigroup approach, SIAM J. Control Optim. Vol. 38, 1757-1793, ISSN 0363-0129
Curtain, R. & Zwart, H. J. (1995). An Introduction to Infinite-Dimensional Linear Control Systems Theory, Springer-Verlag, New York, ISBN 0-387-94475-3
Megan, M.; Sasu, A. L. & Sasu, B. (2002). Stabilizability and controllability of systems associated to linear skew-product semiflows, Rev. Mat. Complutense (Madrid) Vol. 15, 599-618, ISSN 1139-1138
Megan, M.; Sasu, A. L. & Sasu, B. (2005). Theorems of Perron type for uniform exponential stability of linear skew-product semiflows, Dynam. Contin. Discrete Impulsive Systems Vol. 12, 23-43, ISSN 1201-3390
Sasu, B. & Sasu, A. L. (2004). Stability and stabilizability for linear systems of difference equations, J. Differ. Equations Appl. Vol. 10, 1085-1105, ISSN 1023-6198
Sasu, B. (2008). Robust stability and stability radius for variational control systems, Abstract Appl. Analysis Vol. 2008, Article ID 381791, 1-29, ISSN 1085-3375



20 

Robust Linear Control of Nonlinear Flat Systems 

Hebertt Sira-Ramirez 1, John Cortes-Romero 1,2 and Alberto Luviano-Juarez 1
1 Cinvestav-IPN, Av. IPN No. 2508, Departamento de Ingenieria Electrica,
Seccion de Mecatronica
2 Universidad Nacional de Colombia, Facultad de Ingenieria, Departamento de
Ingenieria Electrica y Electronica, Carrera 30 No. 45-03, Bogota, Colombia
1 Mexico
2 Colombia



1. Introduction 

Asymptotic estimation of external, unstructured, perturbation inputs, with the aim of exactly, 
or approximately, canceling their influences at the controller stage, has been treated in the 
existing literature under several headings. The outstanding work of Professor C. D. Johnson
in this respect, under the name of Disturbance Accommodation Control (DAC), dates from the 
nineteen seventies (see Johnson (1971)). Ever since, the theory and practical aspects of DAC 
theory have been actively evolving, as evidenced by the survey paper by Johnson Johnson 
(2008). The theory enjoys an interesting and useful extension to discrete-time systems, as 
demonstrated in the book chapter Johnson (1982). In a recent article, by Parker and Johnson 
Parker & Johnson (2009), an application of DAC is made to the problem of decoupling 
two nonlinearly coupled linear systems. An early application of disturbance accommodation
control in the area of Power Systems is exemplified by the work of Mohadjer and Johnson 
in Mohadjer & Johnson (1983), where the operation of an interconnected power system is 
approached from the perspective of load frequency control. 

A closely related vein to DAC is represented by the sustained efforts of the late Professor 
Jingqing Han, summarized in the posthumous paper, Han Han (2009), and known as: Active 
Disturbance Estimation and Rejection (ADER). The numerous and original developments of 
Prof. Han, with many laboratory and industrial applications, have not been translated into 
English and his seminal contributions remain written in Chinese (see the references in Han 
(2009)). Although the main idea of observer-based disturbance estimation, and subsequent 
cancelation via the control law, is similar to that advocated in DAC, the emphasis in ADER 
lies, mainly, on nonlinear observer based disturbance estimation, with necessary developments 
related to: efficient time derivative computation, practical relative degree computation and 
nonlinear PID control extensions. The work, and inspiration, of Professor Han has found 
interesting developments and applications in the work of Professor Z. Gao and his colleagues 
( see Gao et al. (2001), Gao (2006), also, in the work by Sun and Gao Sun & Gao (2005) and 
in the article by Sun Sun (2007)). In a recent article, a closely related idea, proposed by Prof. 
M. Fliess and C. Join in Fliess & Join (2008), is at the core of Intelligent PID Control (IPIDC). The mainstream of the IPIDC developments makes use of the Algebraic Method and it entails resorting to first order, or at most second order, non-phenomenological plant models. The interesting aspect of this method resides in using suitable algebraic manipulations to locally deprive the system description of the effects of nonlinear uncertain additive terms and, via further special algebraic manipulations, to efficiently identify time-varying control gains as piece-wise constant control input gains (see Fliess et al. (2008)). An entirely algebraic approach for the control of a synchronous generator was presented by Fliess and Sira-Ramirez in Sira-Ramirez & Fliess (2004).

In this chapter, we advocate, within the context of trajectory tracking control for nonlinear 
flat systems, the use of approximate, yet accurate, state dependent disturbance estimation 
via linear Generalized Proportional Integral (GPI) observers. GPI observers are the dual 
counterpart of GPI controllers, developed by M. Fliess et al. in Fliess et al. (2002). A high 
gain GPI observer naturally includes a, self-updating, lumped, time-polynomial model of 
the nonlinear state-dependent perturbation; it estimates it and delivers the time signal to 
the controller for on-line cancelation while simultaneously estimating the phase variables 
related to the measured output. The scheme is, however, approximate, since only an as small as desired reconstruction error is guaranteed at the expense of high, noise-sensitive, gains. The on-line approximate estimation is suitably combined with linear, estimation-based, output feedback control with the appropriate, on-line, disturbance cancelation. The many similarities and the few differences with the DAC and ADER techniques probably lie in 1) the fact that we do not discriminate between exogenous (i.e., external) unstructured perturbation inputs and endogenous (i.e., state-dependent) perturbation inputs in the nonlinear input-output model. These perturbations are all lumped into a simplifying time-varying signal that needs to be linearly estimated. Notice that plant nonlinearities generate time functions that are exogenous to any observer and, hence, algebraic loops are naturally avoided. 2) We emphasize the natural possibilities of differentially flat systems in the use of linear disturbance estimation and linear output feedback control with disturbance cancelation (for the concept of flatness see Fliess et al. (1995) and the book Sira-Ramirez & Agrawal (2004)).

This chapter is organized as follows: Section 2 presents an introduction to linear control 
of nonlinear differentially flat systems via (high-gain) GPI observers and suitable linear 
controllers feeding back the phase variables related to the output function. The single input-single output synchronous generator model, in the form of a swing equation, is described in Section 3. Here, we formulate the reference trajectory tracking problem under a number of information restrictions about the system. The linear observer-linear controller output feedback control scheme is designed for lowering the deviation angle of the generator. We carry out a robustness test regarding the response to a three phase short circuit. We also carry out an evaluation of the performance of the control scheme under significant variations of the two control gain parameters required for an exact cancelation of the gain. Section 4 is devoted to presenting an experimental illustrative example concerning the non-holonomic car, which is also a multivariable nonlinear system with input gain matrix depending on the estimated phase variables associated with the flat outputs.

2. Linear GPI observer-based control of nonlinear systems 

Consider the following perturbed, single-input single-output, smooth nonlinear system,

y^(n) = ψ(t, y, ẏ, ..., y^(n−1)) + φ(t, y)u + ζ(t)   (1)

The unperturbed system (ζ(t) ≡ 0) is evidently flat, as all variables in the system are expressible as differential functions of the flat output y.

We assume that the exogenous perturbation ζ(t) is uniformly absolutely bounded, i.e., it is an L_∞ scalar function. Similarly, we assume that for all bounded solutions, y(t), of (1), obtained by means of a suitable control input u, the additive, endogenous, perturbation input, ψ(t, y(t), ẏ(t), ..., y^(n−1)(t)), viewed as a time signal, is uniformly absolutely bounded.
We also assume that the nonlinear gain function φ(t, y(t)) is L_∞ and uniformly bounded away from zero, i.e., there exists a strictly positive constant μ such that

inf_t |φ(t, y(t))| ≥ μ   (2)

for all smooth, bounded solutions, y(t), of (1) obtained with a suitable control input u.

Although the results below can be extended to the case when the input gain function φ depends on the time derivatives of y, we let, motivated by the synchronous generator case study to be presented, φ be an explicit function of time and of the measured flat output y. This is equivalent to saying that φ(t, y(t)) is perfectly known.

We have the following formulation of the problem:

Given a desired flat output reference trajectory, y*(t), devise a linear output feedback controller for system (1) so that, regardless of the endogenous perturbation signal ψ(t, y(t), ẏ(t), ..., y^(n−1)(t)) and of the exogenous perturbation input ζ(t), the flat output y tracks the desired reference signal y*(t), even if in an approximate fashion. This approximate character specifically means that the tracking error, e_y(t) = y − y*(t), and its first n time derivatives globally asymptotically exponentially converge towards an as small as desired neighborhood of the origin in the reference trajectory tracking error phase space.

The solution to the problem is achieved in an entirely linear fashion if one conceptually considers the nonlinear model (1) as the following linear perturbed system

y^(n) = v + ξ(t)   (3)

where v = φ(t, y)u and ξ(t) = ψ(t, y(t), ẏ(t), ..., y^(n−1)(t)) + ζ(t).
Consider the following preliminary result: 

Proposition 1. The unknown perturbation vector of time signals, ξ(t), in the simplified tracking error dynamics (3), is observable in the sense of Diop and Fliess (see Diop & Fliess (1991)).

Proof. The proof of this fact is immediate after writing (3) as

ξ(t) = y^(n) − v = y^(n) − φ(t, y)u   (4)

i.e., ξ(t) can be written in terms of the output vector y, a finite number of its time derivatives and the control input u. Hence, ξ(t) is observable.

Remark 2. This means, in particular, that if ξ(t) is bestowed with an exact linear model, an exact asymptotic estimation of ξ(t) is possible via a linear observer. If, on the other hand, the linear model is only approximately locally valid, then the estimation obtained via a linear observer is asymptotically convergent towards an equally approximately locally valid estimate.

We assume that the perturbation input ξ(t) may be locally modeled as a (p − 1)-th degree time polynomial z₁ plus a residual term, r(t), i.e.,

ξ(t) = z₁ + r(t) = a₀ + a₁t + ··· + a_{p−1}t^{p−1} + r(t), for all t   (5)

The time polynomial model, z₁ (also called a Taylor polynomial), is invariant with respect to time shifts and it defines a family of (p − 1)-degree Taylor polynomials with arbitrary real




coefficients. We incorporate z₁ as an internal model of the additive perturbation input (see Johnson (1971)).

The perturbation model z₁ will acquire a self updating character when incorporated as part of a linear asymptotic observer whose estimation error is forced to converge to a small vicinity of zero. As a consequence of this, we may safely assume that the self-updating residual function, r(t), and its time derivatives, say r^(p)(t), are uniformly absolutely bounded. To precisely state this, let us denote by ŷ_j an estimate of y^(j−1) for j = 1, ..., n.
We have the following general result: 

Theorem 3. The GPI observer-based dynamical feedback controller:

u = (1/φ(t,y)) [ [y*(t)]^(n) − Σ_{j=0}^{n−1} κ_j (ŷ_{j+1} − [y*(t)]^(j)) − ξ̂(t) ],   ξ̂(t) = z₁   (6)

ŷ̇₁ = ŷ₂ + λ_{p+n−1}(y − ŷ₁)
ŷ̇₂ = ŷ₃ + λ_{p+n−2}(y − ŷ₁)
⋮
ŷ̇ₙ = v + z₁ + λ_p(y − ŷ₁)
ż₁ = z₂ + λ_{p−1}(y − ŷ₁)
⋮
ż_{p−1} = z_p + λ₁(y − ŷ₁)
ż_p = λ₀(y − ŷ₁)   (7)

asymptotically exponentially drives the tracking error phase variables, e_y^(k) = y^(k) − [y*(t)]^(k), k = 0, 1, ..., n − 1, to an arbitrarily small neighborhood of the origin of the tracking error phase space, which can be made as small as desired by an appropriate choice of the controller gain parameters {κ₀, ..., κ_{n−1}}. Moreover, the estimation errors, y^(i) − ŷ_{i+1}, i = 0, ..., n − 1, and the perturbation estimation errors, z_m − ξ^(m−1)(t), m = 1, ..., p, asymptotically exponentially converge towards an as small as desired neighborhood of the origin of the reconstruction error space, which can be made as small as desired by an appropriate choice of the observer gain parameters {λ₀, ..., λ_{p+n−1}}.

Proof. The proof is based on the fact that the estimation error e = y − ŷ₁ satisfies the perturbed linear differential equation

e^(p+n) + λ_{p+n−1}e^(p+n−1) + ··· + λ₀e = r^(p)(t)   (8)

Since r^(p)(t) is assumed to be uniformly absolutely bounded, there exist coefficients λ_k such that e converges to a small vicinity of zero, provided the roots of the associated characteristic polynomial in the complex variable s,

s^(p+n) + λ_{p+n−1}s^(p+n−1) + ··· + λ₁s + λ₀   (9)

are all located deep into the left half of the complex plane. The further away from the imaginary axis of the complex plane these roots are located, the smaller the neighborhood of the origin, in the estimation error phase space, where the estimation error e will remain ultimately bounded (see Kailath (1979)). Clearly, if e and its time derivatives converge to a neighborhood of the origin, then z_j − ξ^(j−1)(t), j = 1, 2, ..., also converge towards a small vicinity of zero.

The tracking error e_y = y − y*(t) evolves according to the following linear perturbed dynamics

e_y^(n) + κ_{n−1}e_y^(n−1) + ··· + κ₀e_y = ξ(t) − z₁(t)   (10)

Choosing the controller coefficients {κ₀, ..., κ_{n−1}} so that the associated characteristic polynomial

s^n + κ_{n−1}s^(n−1) + ··· + κ₀   (11)

exhibits its roots sufficiently far from the imaginary axis in the left half portion of the complex plane, the tracking error, and its various time derivatives, are guaranteed to converge asymptotically exponentially towards a vicinity of the origin of the tracking error phase space. Note that, according to the expected observer performance, the right hand side of (10) is represented by a uniformly absolutely bounded signal already evolving in a small vicinity of the origin. For this reason the roots of (11) may be located closer to the imaginary axis than those of (9). A rather detailed proof of this theorem may be found in the article by Luviano-Juarez et al. (2010).
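As a quick illustration of how the controller (6) and the observer (7) fit together, the following minimal Python sketch simulates a hypothetical second order plant (n = 2) with a degree-2 internal disturbance model (p = 3). The plant, the reference trajectory, the pole locations and the forward Euler discretization are all illustrative assumptions made for this example, not data taken from the chapter.

```python
import numpy as np
from math import comb

# Minimal sketch of Theorem 3 for n = 2, p = 3 (all numerical choices are illustrative).
dt, T = 1e-4, 20.0

def psi(t, y, yd):               # "unknown" endogenous perturbation
    return -np.sin(y) - 0.2 * yd

def phi(t, y):                   # known input gain, bounded away from zero
    return 1.0 + 0.5 * np.cos(y)

def zeta(t):                     # "unknown" exogenous perturbation
    return 0.1 * np.sin(3.0 * t)

def ref(t):                      # reference y*(t) and its first two derivatives
    return 0.5 * np.sin(t), 0.5 * np.cos(t), -0.5 * np.sin(t)

w_o = 25.0                       # observer poles: roots of (s + w_o)^5
l4, l3, l2, l1, l0 = (comb(5, k) * w_o ** (5 - k) for k in (4, 3, 2, 1, 0))
w_c = 4.0                        # controller poles: roots of (s + w_c)^2
k1, k0 = 2.0 * w_c, w_c ** 2

y, yd = 1.0, 0.0                 # plant state
y1h, y2h = y, 0.0                # output estimate initialized at the measurement (limits peaking)
z1, z2, z3 = 0.0, 0.0, 0.0       # internal Taylor model of the lumped disturbance

for k in range(int(T / dt)):
    t = k * dt
    ys, yds, ydds = ref(t)
    # controller (6): cancel the estimated disturbance z1 and impose the tracking dynamics
    u = (ydds - k1 * (y2h - yds) - k0 * (y1h - ys) - z1) / phi(t, y)
    v = phi(t, y) * u
    e = y - y1h
    # GPI observer (7), one forward Euler step
    y1h += (y2h + l4 * e) * dt
    y2h += (v + z1 + l3 * e) * dt
    z1 += (z2 + l2 * e) * dt
    z2 += (z3 + l1 * e) * dt
    z3 += (l0 * e) * dt
    # plant
    ydd = psi(t, y, yd) + phi(t, y) * u + zeta(t)
    y, yd = y + yd * dt, yd + ydd * dt

print("final tracking error:", y - ref(T)[0])
```

The disturbance estimate z₁ absorbs both ψ and ζ, so the controller never needs an explicit model of either; only the input gain φ is used.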

Remark 4. The proposed GPI observer (7) is a high gain observer which is prone to exhibiting the "peaking" phenomenon at the initial time. We use a suitable "clutch" to smooth out these transient peaking responses in all observer variables that need to be used by the controller. This is accomplished by means of a factor function smoothly interpolating between an initial value of zero and a final value of unity. We denote this clutching function as s_f(t) ∈ [0, 1] and define it in the following (non-unique) way

s_f(t) = 1 for t > ε,   s_f(t) = sin^q(πt / (2ε)) for t ≤ ε   (12)

where q is a suitably large positive even integer.
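A possible plain implementation of the clutching factor (12) is sketched below; the window length ε and the even exponent q are illustrative tuning choices.

```python
import numpy as np

def clutch(t, eps=1.0, q=8):
    """Smooth factor s_f(t): 0 at t = 0, 1 for t > eps, as in (12)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > eps, 1.0, np.sin(np.pi * t / (2.0 * eps)) ** q)

# Usage: weight an observer variable so the controller ignores the initial peaking,
# e.g. z1_s = clutch(t) * z1 for the disturbance estimate.
```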

2.1 Generalized proportional integral observer with integral injection 

Let ξ(t) be a measured signal with a uniformly absolutely bounded iterated integral of order m. The signal ξ(t) is measured, and its first few time derivatives are required for some purpose.

Definition 5. We say that a signal p₁(t) converges to a neighborhood of ξ(t) whenever the error signal, ξ(t) − p₁(t), is ultimately uniformly absolutely bounded inside a small vicinity of the origin.

The following proposition aims at the design of a GPI observer based estimation of the time derivatives of a signal, ξ(t), where ξ(t) is possibly corrupted by a zero mean stochastic process whose statistics are unknown. In order to smooth out the noise effects on the on-line computation of the time derivatives, we carry out a double iterated integration of the measured signal, ξ(t), thus assuming the second integral of ξ(t) is uniformly absolutely bounded (i.e., m = 2).




Proposition 6. Consider the following perturbed second order integration system, where the input signal, ξ(t), is a measured (zero-mean) noise corrupted signal satisfying the above assumptions:

ẏ₀ = y₁,   ẏ₁ = ξ(t)   (13)

Consider the following integral injection GPI observer for (13), including an internal time polynomial model of degree r for the signal ξ(t), expressed through the variables p_j:

ŷ̇₀ = ŷ₁ + λ_{r+1}(y₀ − ŷ₀)
ŷ̇₁ = p₁ + λ_r(y₀ − ŷ₀)
ṗ₁ = p₂ + λ_{r−1}(y₀ − ŷ₀)   (14)
⋮
ṗ_r = λ₀(y₀ − ŷ₀)   (15)

Then, the observer variables, p₁, p₂, p₃, ..., respectively, asymptotically converge towards an as small as desired neighborhood of the disturbance input, ξ(t), and of its time derivatives, ξ̇(t), ξ̈(t), ..., provided the observer gains, {λ₀, ..., λ_{r+1}}, are chosen so that the roots of the polynomial in the complex variable s,

P(s) = s^(r+2) + λ_{r+1}s^(r+1) + ··· + λ₁s + λ₀   (16)

are located deep into the left half of the complex plane. The further the distance of such roots from the imaginary axis of the complex plane, the smaller the neighborhood of the origin bounding the reconstruction errors.

Proof. Define the twice iterated integral injection error as ε = y₀ − ŷ₀. The injection error dynamics is found to be described by the perturbed linear differential equation

ε^(r+2) + λ_{r+1}ε^(r+1) + ··· + λ₁ε̇ + λ₀ε = ξ^(r)(t)   (17)

By choosing the observer parameters, λ₀, λ₁, ..., λ_{r+1}, so that the polynomial (16) is Hurwitz, with roots located deep into the left half of the complex plane, then, according to well known results on the solutions of perturbed high gain linear differential equations, the injection error ε and its time derivatives are ultimately uniformly bounded by a small vicinity of the origin of the reconstruction error phase space, whose radius of containment fundamentally depends on the smallest real part of all the eigenvalues of the dominantly linear closed loop dynamics (see Luviano-Juarez et al. (2010) and also Fliess & Rudolph (1997)). □
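The following Python sketch illustrates Proposition 6 numerically: the measured, noisy signal is integrated twice and the integral-injection observer (14)-(15) recovers the signal and its first derivative. The test signal, the noise level, the model degree r and the common pole location are assumptions made for the example.

```python
import numpy as np
from math import comb

dt, T = 1e-4, 4.0
t = np.arange(0.0, T, dt)
xi_true = np.sin(2.0 * t) + 0.3 * t                       # signal to be differentiated
xi_meas = xi_true + 0.01 * np.random.randn(t.size)        # zero-mean measurement noise

r, w = 3, 30.0                                            # Taylor-model degree, pole at -w
lam = [comb(r + 2, k) * w ** (r + 2 - k) for k in range(r + 2)]   # coeffs of (s + w)^(r+2)

y0 = y1 = 0.0                                             # iterated integrals of the measurement
y0h = y1h = 0.0                                           # their observer estimates
p = np.zeros(r)                                           # p[0] ~ xi, p[1] ~ d(xi)/dt, ...
xi_hat = np.zeros(t.size)

for k in range(t.size):
    y1 += xi_meas[k] * dt                                 # first iterated integral
    y0 += y1 * dt                                         # second iterated integral (injected)
    e = y0 - y0h
    # observer (14)-(15), forward Euler step
    y0h += (y1h + lam[r + 1] * e) * dt
    y1h += (p[0] + lam[r] * e) * dt
    for j in range(r - 1):
        p[j] += (p[j + 1] + lam[r - 1 - j] * e) * dt
    p[r - 1] += lam[0] * e * dt
    xi_hat[k] = p[0]

print("max estimation error after transient:",
      np.max(np.abs(xi_hat[t > 1.0] - xi_true[t > 1.0])))
```

Because the injection error is a double integral of the measurement, the noise is heavily filtered before it reaches the derivative estimates.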

3. Controlling the single synchronous generator model 

In this section, we advocate, within the context of the angular deviation trajectory control for 
a single synchronous generator model, the use of approximate, yet accurate, state dependent 
disturbance estimation via linear Generalized Proportional Integral (GPI) observers. GPI 
observers are the dual counterpart of GPI controllers, developed by M. Fliess et al. in 
Fliess et al. (2002). A high gain GPI observer naturally includes a, self-updating, lumped, 
time-polynomial model of the nonlinear state-dependent perturbation; it estimates it and 
delivers the time signal to the controller for on-line cancelation while simultaneously 
estimating the phase variables related to the measured output. The scheme is, however, 
approximate since only a small as desired reconstruction error is guaranteed at the expense 




of high, noise-sensitive, gains. The on-line approximate estimation is suitably combined with linear, estimation-based, output feedback control with the appropriate, on-line, disturbance cancelation. The many similarities and the few differences with the DAC and ADER techniques probably lie in 1) the fact that we do not discriminate between exogenous (i.e., external) unstructured perturbation inputs and endogenous (i.e., state-dependent) perturbation inputs in the nonlinear input-output model. These perturbations are all lumped into a simplifying time-varying signal that needs to be linearly estimated. Notice that plant nonlinearities generate time functions that are exogenous to any observer and, hence, algebraic loops are naturally avoided. 2) We emphasize the natural possibilities of differentially flat systems in the use of linear disturbance estimation and linear output feedback control with disturbance cancelation (for the concept of flatness see Fliess et al. (1995) and the book Sira-Ramirez & Agrawal (2004)).

3.1 The single synchronous generator model 

Consider the swing equation of a synchronous generator, connected to an infinite bus, with a series capacitor connected with the help of a thyristor bridge (see Hingorani & Gyugyi (2000)),

ẋ₁ = x₂
ẋ₂ = P_m − b₁x₂ − b₂x₃ sin(x₁)
ẋ₃ = b₃(−x₃ + x₃*(t) + u + ζ(t))   (18)

x₁ is the load angle, considered to be the measured output. The variable x₂ is the deviation from the nominal, synchronous, speed at the shaft, while x₃ stands for the admittance of the system. The control input, u, is usually interpreted as a quantity related to the firing angle of the switch. ζ(t) is an unknown, external, perturbation input. The static equilibrium point of the system, which may be parameterized in terms of the equilibrium position for the angular deviation, X₁, is given by

x̄₁ = X₁,   x̄₂ = 0,   x̄₃ = P_m / (b₂ sin(X₁))   (19)

We assume that the system parameters b₂ and b₃ are known. The constant quantities P_m, b₁ and the time-varying quantity x₃*(t) are assumed to be completely unknown.

3.2 Problem formulation 

It is desired to have the load angular deviation, y = x₁, track a given reference trajectory, y*(t) = x₁*(t), which remains bounded away from zero, independently of the unknown system parameters and in spite of possible external system disturbances (such as short circuits in the three phase line, setting, momentarily, the mechanical power, P_m, to zero), and other unknown, or un-modeled, perturbation inputs comprised in ζ(t).

3.3 Main results 

The unperturbed system in (18) is flat, with flat output given by the load angle deviation y = x₁. Indeed, all system variables are differentially parameterizable in terms of the load angle and its time derivatives. We have:

x₁ = y
x₂ = ẏ
x₃ = (P_m − b₁ẏ − ÿ) / (b₂ sin(y))
u = −(b₁ÿ + y^(3)) / (b₃b₂ sin(y)) − ((P_m − b₁ẏ − ÿ) ẏ cos(y)) / (b₃b₂ sin²(y)) + (P_m − b₁ẏ − ÿ) / (b₂ sin(y))   (20)

The perturbed input-output dynamics, devoid of any zero dynamics, is readily obtained with the help of the control input differential parametrization (20). One obtains the following simplified, perturbed, system dynamics, including the lumped perturbation ξ(t):

y^(3) = −[b₃b₂ sin(y)] u + ξ(t)   (21)

where ξ(t) is given by

ξ(t) = −b₁ÿ + b₃(P_m − b₁ẏ − ÿ) − (P_m − b₁ẏ − ÿ)(ẏ cos(y) / sin(y)) − b₃b₂ sin(y)(x₃*(t) + ζ(t))   (22)
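The algebra leading from (18) to (21)-(22) can be checked symbolically; the short SymPy sketch below (an optional verification, not part of the chapter) differentiates the flat output three times along the model and isolates the input gain and the lumped disturbance.

```python
import sympy as sp

t = sp.symbols('t')
b1, b2, b3, Pm = sp.symbols('b1 b2 b3 P_m', positive=True)
u, x3s, zeta = sp.symbols('u x3s zeta')          # treated algebraically at a frozen instant
y = sp.Function('y')(t)

x3 = (Pm - b1 * sp.diff(y, t) - sp.diff(y, t, 2)) / (b2 * sp.sin(y))
x3dot = b3 * (-x3 + x3s + u + zeta)              # third state equation of (18)
# differentiate y'' = P_m - b1*y' - b2*x3*sin(y) once more:
y3 = -b1 * sp.diff(y, t, 2) - b2 * x3dot * sp.sin(y) - b2 * x3 * sp.diff(y, t) * sp.cos(y)

gain = sp.simplify(sp.diff(y3, u))               # input gain of (21): -b2*b3*sin(y)
xi = sp.simplify(y3 - gain * u)                  # lumped disturbance xi(t) of (22)
print(gain)
print(xi)
```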

We consider ξ(t) as an unknown but uniformly absolutely bounded disturbance input that needs to be estimated on-line by means of an observer and, subsequently, canceled from the simplified system dynamics via feedback, in order to regulate the load angle variable y towards the desired reference trajectory y*(t). It is assumed that the gain parameters b₂ and b₃ are known.

The problem is then reduced to the trajectory tracking problem defined on the perturbed third order, predominantly linear, system (21) with measurable state-dependent input gain and unknown, but uniformly bounded, disturbance input.

We propose the following estimated state feedback controller with a smoothed (i.e., "clutched") disturbance cancelation term, z_{1s}(t) = s_f(t)z₁(t), and smoothed estimated phase variables y_{js} = s_f(t)ŷ_j(t), j = 1, 2, 3, with s_f(t) as in equation (12) with a suitable ε value:

u = (1 / (−b₃b₂ sin(y))) [ (y*(t))^(3) − κ₂(y_{3s} − ÿ*(t)) − κ₁(y_{2s} − ẏ*(t)) − κ₀(y − y*(t)) − z_{1s} ]

The corresponding variables, ŷ₃, ŷ₂ and z₁, are generated by the following linear GPI observer:

ŷ̇₁ = ŷ₂ + λ₅(y − ŷ₁)
ŷ̇₂ = ŷ₃ + λ₄(y − ŷ₁)
ŷ̇₃ = −(b₃b₂ sin(y))u + z₁ + λ₃(y − ŷ₁)
ż₁ = z₂ + λ₂(y − ŷ₁)
ż₂ = z₃ + λ₁(y − ŷ₁)
ż₃ = λ₀(y − ŷ₁)   (23)



where ŷ₁ is the redundant estimate of the output y, ŷ₂ is the shaft velocity estimate and ŷ₃ is the shaft acceleration estimate. The variable z₁ estimates the perturbation input ξ(t) by means of a local, self updating, polynomial model of third order, taken as an internal model of the state dependent additive perturbation affecting the input-output dynamics (21).
The clutched observer variables z_{1s}, y_{2s} and y_{3s} are defined by

δ_s = s_f(t)δ = δ for t > ε,   δ_s = δ sin^q(πt / (2ε)) for t ≤ ε   (24)

with δ standing for z₁, ŷ₂ or ŷ₃, yielding z_{1s}, y_{2s} and y_{3s}, respectively.

The reconstruction error system is obtained by subtracting the observer model from the perturbed simplified linear system model. Letting e = e₁ = y − ŷ₁, e₂ = ẏ − ŷ₂, etc., we have

ė₁ = e₂ − λ₅e₁
ė₂ = e₃ − λ₄e₁
ė₃ = ξ(t) − z₁ − λ₃e₁
ż₁ = z₂ + λ₂(y − ŷ₁)
ż₂ = z₃ + λ₁(y − ŷ₁)
ż₃ = λ₀(y − ŷ₁)   (25)

The reconstruction error, e = e₁ = y − ŷ₁, is seen to satisfy the following linear, perturbed, dynamics

e^(6) + λ₅e^(5) + λ₄e^(4) + ··· + λ₁ė + λ₀e = ξ^(3)(t)   (26)

Choosing the gains {λ₅, ..., λ₀} so that the roots of the characteristic polynomial,

p_o(s) = s⁶ + λ₅s⁵ + λ₄s⁴ + ··· + λ₁s + λ₀   (27)



are located deep into the left half of the complex plane, it follows from bounded input, bounded output stability theory that the trajectories of the reconstruction error e and those of its time derivatives e^(j), j = 1, 2, ..., are uniformly ultimately bounded by a disk, centered at the origin in the reconstruction error phase space, whose radius can be made arbitrarily small as the roots of p_o(s) are pushed further to the left of the complex plane.
The closed loop tracking error dynamics satisfies 

e_y^(3) + κ₂e_y^(2) + κ₁ė_y + κ₀e_y = ξ(t) − z_{1s}   (28)

The difference, ξ(t) − z_{1s}, being arbitrarily small after some time, produces a reference trajectory tracking error, e_y = y − y*(t), that also asymptotically exponentially converges towards a small vicinity of the origin of the tracking error phase space.

The characteristic polynomial of the predominant linear component of the closed loop system 
may be set to have poles placed in the left half of the complex plane at moderate locations 

p_c(s) = s³ + κ₂s² + κ₁s + κ₀   (29)




3.4 Simulation results 

3.4.1 A desired rest-to-rest maneuver 

It is desired to smoothly lower the load angle, y = x₁, from an equilibrium value of y = 1 [rad] towards a smaller value, say, y = 0.6 [rad], in a reasonable amount of time, say, T = 5 [s], starting at t = 5 [s] from an equilibrium operation characterized by (see Bazanella et al. (1999) and Pai (1989))

x₁ = 1,   x₂ = 0,   x₃ = 0.8912

We used the following parameter values for the system:

b₁ = 1,   b₂ = 21.3360,   b₃ = 20
We set the external perturbation input, ζ(t), as the time signal

ζ(t) = 0.005 (e^{sin²(t)} cos(t)) cos(0.3t)

The observer parameters were set in accordance with the following desired characteristic polynomial p_o(s) for the, predominantly, linear reconstruction error dynamics. We set p_o(s) = (s² + 2ζω_{no}s + ω_{no}²)³, with

ζ = 1,   ω_{no} = 20

The controller gains κ₂, κ₁, κ₀ were set so that the following closed loop characteristic polynomial, p_c(s), was enforced on the tracking error dynamics,

p_c(s) = (s² + 2ζ_c ω_{nc}s + ω_{nc}²)(s + p_c)

with

p_c = 3,   ω_{nc} = 3,   ζ_c = 1
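For reference, the corresponding numerical gain values can be obtained by expanding the two characteristic polynomials; the short computation below simply multiplies out the specified factors with NumPy.

```python
import numpy as np

z_o, w_no = 1.0, 20.0            # observer damping and natural frequency
z_c, w_nc, pc = 1.0, 3.0, 3.0    # controller damping, natural frequency and real pole

p_o = np.poly1d([1.0, 2.0 * z_o * w_no, w_no ** 2]) ** 3            # (s^2 + 2*z*w*s + w^2)^3
p_c = np.poly1d([1.0, 2.0 * z_c * w_nc, w_nc ** 2]) * np.poly1d([1.0, pc])

print("observer gains  lambda5..lambda0:", p_o.coeffs[1:])          # coefficients of (27)
print("controller gains kappa2..kappa0:", p_c.coeffs[1:])           # coefficients of (29)
```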

The trajectory for the load angle, y*(t), was set to be

y*(t) = x̄_{1,initial} + ρ(t, t₁, t₂)(x̄_{1,final} − x̄_{1,initial})

with ρ(t, t₁, t₂) being a smooth Bezier polynomial achieving a smooth rest-to-rest trajectory for the nominal load angle y*(t) from the initial equilibrium value y*(t₁) = x̄_{1,initial} = 1 [rad] towards the final desired equilibrium value y*(t₂) = x̄_{1,final} = 0.6 [rad]. We set t₁ = 5.0 [s], t₂ = 10.0 [s]; ε = 3.0.
The interpolating polynomial ρ(t, t₁, t₂) is of the form:

ρ(t) = τ⁸ (r₁ − r₂τ + r₃τ² − r₄τ³ + r₅τ⁴ − r₆τ⁵ + r₇τ⁶ − r₈τ⁷ + r₉τ⁸)

with

τ = (t − t₁) / (t₂ − t₁)

The choice

r₁ = 12870, r₂ = 91520, r₃ = 288288
r₄ = 524160, r₅ = 600600, r₆ = 443520
r₇ = 205920, r₈ = 54912, r₉ = 6435

renders a time polynomial which is guaranteed to have enough derivatives equal to zero, both at the beginning and at the end of the desired rest-to-rest maneuver.

Fig. 1. Performance of GPI observer based linear controller for load angle rest-to-rest trajectory tracking in a perturbed synchronous generator.

Figure 1 depicts the closed loop performance of the proposed GPI observer based linear output feedback controller for the forced evolution of the synchronous generator load angle trajectory following a desired rest to rest maneuver.
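A small Python sketch of the rest-to-rest reference used above is given next; it evaluates the Bezier-type polynomial and the time derivatives needed by the controller. The endpoint values and the interval [t₁, t₂] follow the simulation setup, while the clamping outside the interval is an implementation choice made here.

```python
import numpy as np

r = [12870, 91520, 288288, 524160, 600600, 443520, 205920, 54912, 6435]
# rho(tau) = tau^8 * (r1 - r2*tau + r3*tau^2 - ... + r9*tau^8), ascending powers of tau
coeffs = np.zeros(17)
coeffs[8:] = [(-1) ** i * r[i] for i in range(9)]
rho = np.polynomial.Polynomial(coeffs)
d1, d2, d3 = rho.deriv(1), rho.deriv(2), rho.deriv(3)

def reference(t, t1=5.0, t2=10.0, y_i=1.0, y_f=0.6):
    """Return y*(t) and its first three time derivatives for the rest-to-rest maneuver."""
    if t <= t1:
        return y_i, 0.0, 0.0, 0.0
    if t >= t2:
        return y_f, 0.0, 0.0, 0.0
    T, tau, dy = t2 - t1, (t - t1) / (t2 - t1), y_f - y_i
    return (y_i + dy * rho(tau), dy * d1(tau) / T,
            dy * d2(tau) / T ** 2, dy * d3(tau) / T ** 3)

print(reference(7.5))   # values in the middle of the maneuver
```

Since the polynomial coefficients sum to one, the trajectory reaches exactly y_f at t = t₂ with the required derivatives equal to zero at both ends.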

3.4.2 Robustness with respect to controller gain mismatches 

We simulated the behavior of the closed loop system when the gain parameters product, b₂b₃, is not precisely known and the controller is implemented with an estimated (guessed) value of this product, denoted by b̂₂b̂₃ and set to be b̂₂b̂₃ = k b₂b₃. We determined that the closed loop tracking performance is preserved for any positive factor k in the interval [0.95, ∞). However, if we allow independent estimates of the parameters in the form b̂₂ = k_{b₂} b₂ and b̂₃ = k_{b₃} b₃, we found that a larger robustness interval of mismatches is allowed by satisfying the empirical relation k_{b₂} k_{b₃} ≥ 0.95. The assessment


was made in terms of the proposed rest-to-rest maneuver and the resulting simulations look about the same.

Fig. 2. Performance of GPI observer based controller under a sudden loss of power at t = 2 [sec] during 0.2 [sec].

3.4.3 Robustness with respect to sudden power failures 

We simulated an un-modeled sudden three phase short circuit occurring at time t = 2 [s]. The power failure lasts 0.2 [s]. Figure 2 depicts the performance of the GPI observer based controller in the rapid transient occurring during the recovery of the prevailing equilibrium conditions.



4. Controlling the non-holonomic car 

Controlling non-holonomic mobile robots has been an active topic of research during the 
past three decades due to the wide variety of applications. Several methods have been 
proposed, and applied, to solve the regulation and trajectory tracking tasks in mobile robots. 
These methods range from sliding mode techniques Aguilar et al. (1997), Wang et al. (2009), 




Yang & Kim (1999), backstepping Hou et al. (2009), neural networks approaches (see Peng 
et al. (2007) and references therein), linearization techniques Kim & Oh (1999), and classical 
control approaches (see Sugisaka & Hazry (2007)) among many other possibilities. A classical 
contribution to this area is given in the work of Canudas de Wit Wit & Sordalen (1992). An 
excellent book, dealing with some appropriate control techniques for this class of systems, is 
that of Dixon et al. Dixon et al. (2001). A useful approach to control non-holonomic mechanical 
systems is based on linear time-varying control schemes (see Pomet (1992); Tian & Cao (2007)). 
In the pioneering work of Samson Samson (1991), smooth feedback controls (depending on 
an exogenous time variable) are proposed to stabilize a wheeled cart.

It has been shown that some mobile robotic systems are differentially flat when slippage is not allowed in the model (see Leroquais & d'Andrea Novel (1999)). The differential flatness property allows a complete parametrization of all system variables in terms of the flat outputs and a finite number of their time derivatives. Flat outputs constitute a limited set of special, differentially independent, output variables. The reader is referred to the work of Fliess et al. (1995) for the original introduction of the idea in the control systems literature.

From the flatness of the non-holonomic car system, it is possible to reduce the control task 
to that of a linearizable, extended, multivariable input-output system. The linearization of 
the flat output dynamics requires the cancelation of the nonlinear input gain matrix, which 
depends only on the cartesian velocities of the car. To obtain this set of noisy unmeasured state 
variables, we propose linear Generalized Proportional Integral (GPI) observers consisting 
of linear, high gain Luenberger-like observers Luenberger (1971) exhibiting an internal 
polynomial model for the measured signal. These GPI observers, introduced in Sira-Ramirez 
& Feliu-Battle (2010), can provide accurate, filtered, time derivatives of the injected output 
signals via an appropriate iterated integral estimation error injection (see also Cortes-Romero 
et al. (2009)). Since high-gain observers are known to be sensitive to noisy measurements, the 
iterated integral injection error achieves a desirable low pass filtering effect. 
The idealized model of a single axis, two wheeled vehicle is depicted in Figure 3. The axis is of length L and each wheel of radius R is powered by a direct current motor yielding variable angular speeds ω₁, ω₂ respectively. The position variables are (x₁, x₂) and θ is the orientation angle of the robot. The linear velocities of the points of contact of the wheels with respect to the ground are given by v₁ = ω₁R and v₂ = ω₂R. In this case, the only measurable variables are x₁, x₂. This system is subject to non-holonomic restrictions.
The kinematic model of the system is stated as follows:

ẋ₁ = u₁ cos(θ)
ẋ₂ = u₁ sin(θ)
θ̇ = u₂   (30)

where:

(u₁, u₂)ᵀ = [R/2, R/2; −R/L, R/L] (ω₁, ω₂)ᵀ,   i.e.,   u₁ = (R/2)(ω₁ + ω₂),   u₂ = (R/L)(ω₂ − ω₁)

The control objective is stated as follows: given a desired trajectory (x₁*(t), x₂*(t)), devise feedback control laws, u₁, u₂, such that the flat output coordinates, (x₁, x₂), perform an asymptotic tracking while rejecting the un-modeled additive disturbances.






Fig. 3. The one axis car 



4.1 Controller design 

System (30) is differentially flat, with flat outputs given by the pair of coordinates (x₁, x₂), which describe the position of the rear axis middle point. Indeed, the rest of the system variables, including the inputs, are differentially parameterized as follows:

θ = arctan(ẋ₂ / ẋ₁)

u₁ = √(ẋ₁² + ẋ₂²),   u₂ = (ẍ₂ẋ₁ − ẍ₁ẋ₂) / (ẋ₁² + ẋ₂²)
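The parametrization above can be exercised numerically; the sketch below (with an arbitrary circular flat-output trajectory, an assumption made for the example) recovers θ, u₁ and u₂ from the flat outputs and their time derivatives.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)
rad, w = 1.0, 0.5                                         # arbitrary circular trajectory
x1, x2 = rad * np.cos(w * t), rad * np.sin(w * t)
x1d, x2d = np.gradient(x1, t), np.gradient(x2, t)         # numerical first derivatives
x1dd, x2dd = np.gradient(x1d, t), np.gradient(x2d, t)     # numerical second derivatives

theta = np.arctan2(x2d, x1d)
u1 = np.sqrt(x1d ** 2 + x2d ** 2)                         # forward velocity
u2 = (x2dd * x1d - x1dd * x2d) / (x1d ** 2 + x2d ** 2)    # turning rate

print(u1[1000], u2[1000])   # expected: ~ rad*w and ~ w for a circle
```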



Note that the relation between the inputs and the flat outputs' highest derivatives is not invertible due to an ill-defined relative degree. To overcome this obstacle to feedback linearization, we introduce, as an extended auxiliary control input, the time derivative of u₁. We have:



u̇₁ = (ẋ₁ẍ₁ + ẋ₂ẍ₂) / √(ẋ₁² + ẋ₂²)



This control input extension yields now an invertible control input-to-flat outputs highest 
derivatives relat